perm filename ARPA.PUB[ESS,JMC] blob sn#188575 filedate 1975-11-24 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00027 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00003 00002	.!SPACES ← "#####################################################" 
C00006 00003	.CENTER CRBREAK GREEKS
C00008 00004	ABSTRACT
C00013 00005	.INSERT CONTENTS
C00022 00006	.GROUP
C00025 00007	.SS ROBOTICS
C00079 00008	.SSS REFERENCES
C00086 00009	.SS  MATHEMATICAL THEORY OF COMPUTATION
C00098 00010	.SS AUTOMATIC DEDUCTION
C00114 00011	.SS FACILITIES
C00117 00012	.SS PROJECT PUBLICATIONS
C00123 00013	>> A. W. Biermann and J. A. Feldman, "On the Synthesis of Finite-state Machines from Samples
C00127 00014	.SKIP TO COLUMN 1
C00135 00015	.SKIP TO COLUMN 1
C00140 00016	.SKIP TO COLUMN 1
C00169 00017	                                1972
C00203 00018	.SS BUDGET
C00205 00019	.SKIP TO COLUMN 1
C00207 00020	.SKIP TO COLUMN 1
C00210 00021	,SKIP TO COLUMN 1
C00213 00022	.SKIP TO COLUMN 1
C00216 00023	.SEC HEURISTIC PROGRAMMING PROJECT
C00217 00024	.SEC NETWORK PROTOCOL DEVELOPMENT PROJECT
C00218 00025	.SEC TOTAL BUDGET
C00219 00026	.SEC COGNIZANT PERSONNEL
C00221 00027	.BACK
C00222 ENDMK
C⊗;
.!SPACES ← "#####################################################" ;
.!HYPHENS ← "--------------------------------------------------------------" ;
.MACRO FRACTION(NUM, DEN) ; ⊂
.TURN ON "{↑↓[]&#" ;
.	N ← "NUM" ; D ← "DEN" ;
.	LN ← LENGTH(N) ; LD ← LENGTH(D) ;
.	IF LN > LD THEN START D ← !SPACES[1 TO (LN-LD) DIV 2] & D ; LMAX ← LN END ;
.	IF LD > LN THEN START N ← !SPACES[1 TO (LD-LN) DIV 2] & N ; LMAX ← LD END ;
.	"↑[{N}]&↓[{D}]&[{(!HYPHENS[1 TO LMAX])}]" ; TURN OFF ; ⊃

.MACRO SCRIPTS ⊂ TURN ON "↑↓[]&_∪" ⊃
.MACRO GREEKS ⊂ TURN ON "{∂\αβ#←→∞" ⊃
.MACRO FORMAT ⊂ SCRIPTS ; GREEKS ⊃
.MACRO FAC ⊂FILL ADJUST COMPACT CRSPACE GREEKS⊃

.COUNT SECTION
.MACRO SEC(NAME) ⊂ SECNAME←SSNAME←NULL
.BEGIN SKIP TO COLUMN 1; NEXT SECTION; TURN ON "{∂∞→#"
{!}.##NAME {SKIP
.SECNAME←"NAME"
.SEND CONTENTS ⊂ SKIP
{!}.##NAME\∞ ∞.∞ →#{PAGE!}
. ⊃
.END  ⊃

.COUNT SUBSECTION IN SECTION PRINTING "!.1"
.MACRO SS(NAME) ⊂ SSNAME←NULL
.BEGIN SKIP TO COLUMN 1; NEXT SUBSECTION; TURN ON "{∂∞→#"
∂8{SUBSECTION!}##NAME{SKIP 1
.SSNAME←"NAME"
.SEND CONTENTS ⊂
∂8{SUBSECTION!}##NAME\∞ ∞.∞ →#{PAGE!}
. ⊃
.END ⊃

.COUNT SUB2 IN SUBSECTION PRINTING "!.1"
.MACRO SSS(NAME) ⊂
.IF LINES<7 THEN SKIP TO COLUMN 1 ELSE SKIP 1; NEXT SUB2
.BEGIN TURN ON "{∂∞→#"
∂(16){SUB2!}##NAME{SKIP
.SEND CONTENTS ⊂
∂(16){SUB2!}##NAME\∞ ∞.∞ →#{PAGE!}
. ⊃
.END ⊃

.MACRO BACK ⊂
.PORTION CONTENTS
.COUNT PAGE PRINTING "i"
.FILL NOJUST FORMAT CRBREAK
.SECNAME←SSNAME←NULL
.INDENT 0,30,10 PREFACE 1 TABS 30,33,36,39,42,45,48,51,54,57,60,63
←TABLE OF CONTENTS
.SKIP 2
∂(16)Section→Page
.SECNAME ← "TABLE OF CONTENTS"
.RECEIVE
.		⊃

.MACRO BV ⊂ SKIP; BEGIN VERBATIM ⊃

.MACRO FIG(NAME) ⊂ PORTION NAME
.GROUP SKIP 20
←NAME GOES HERE
.	⊃
.CENTER CRBREAK GREEKS

Proposal to
.SKIP 2
THE ADVANCED RESEARCH PROJECTS AGENCY
.SKIP 2
for support of
.SKIP 3
THE STANFORD ARTIFICIAL INTELLIGENCE PROJECT
.SKIP 2
John McCarthy, Professor of Computer Science
Principal Investigator
.SKIP 3
THE HEURISTIC PROGRAMMING PROJECT
.SKIP 2
Edward Feigenbaum, Professor of Computer Science
Co-Principal Investigator
.SKIP
Joshua Lederberg, Professor of Genetics
Co-Principal Investigator
.SKIP 2 }and{ SKIP 1
THE NETWORK PROTOCOL DEVELOPMENT PROJECT
.SKIP 2
Vinton Cerf, Assistant Professor of Computer Science and
Electrical Engineering, Principal Investigator
.SKIP TO LINE 45
January 1973
.SKIP 2
STANFORD UNIVERSITY
ABSTRACT
.FAC; BEGIN DOUBLE SPACE; PREFACE 2
Research projects in artificial intelligence, heuristic programming,
and network protocol development are proposed.  These projects will
be administratively distinct within the Computer Science Department
and are described below in separate sections with separate budgets.
The total cost is
$3 million over a period of two years and two weeks (15 June 1973
through 30 June 1975).

←Artificial Intelligence

The Stanford Artificial Intelligence Project proposes to advance
existing research programs in robotics, representation theory,
mathematical theory of computation, and automatic deduction.
These activities will cost a total of $1.25 million per year.

A principal objective of the robotics work will be to lay
foundations in computer vision and manipulation that will
permit major advances in industrial automation.

The work on representation theory will enlarge the set of
problems that can be represented and, hopefully, solved by
computers.  This field is fundamental to language understanding,
learning from experience, and advanced problem solving.

Mathematical theory of computation is aimed at formally proving
the correctness of programs.  This will provide a much more
powerful basis for certifying programs than our current imperfect
"debugging" techniques.

The automatic deduction work will extend heuristic programming
techniques for theorem proving to a wider range of problems
than can now be handled.

←Heuristic Programming

The Heuristic Programming Project is planned to continue at a level
of $200,000 per year.

←Network Protocol Development

The Network Protocol Development Project has two parts costing a total
of about $50,000 per year.

The first part
is to provide for the orderly design, development, implementation,
and review of protocols for the ARPA network.  A key element in this
task is to supply theoretical bases for the design of protocols and
to make predictions of performance to be verified by observation and
experiment (e.g. through the Network Measurement Center).

The second part involves the coordination of an effort to specify
guidelines for subnet and Host-Host protocol design so that it will
be technically feasible for national networks to be interconnected.
The essential problem here is  to  isolate  and  raise  the  critical
issues  which  must  be resolved by the protocols, and to effectively
stimulate and lead the working group's efforts.  The cost of these
activities will be about $50,000 per year.
.END
.INSERT CONTENTS
.PORTION MAIN
.PAGE←NULL
.EVERY HEADING({SECNAME},,{SSNAME})
.EVERY FOOTING(,{PAGE!},)
.SEC ARTIFICIAL INTELLIGENCE PROJECT

Artificial  Intelligence is the experimental and theoretical study of
perceptual and intellectual processes using computers.  Its  ultimate
goal  is to understand these processes well enough to make a computer
perceive, understand and act in ways now only  possible  for  humans.

	The Stanford Artificial Intelligence Project will soon have
been in existence ten years, although the present location and the
computer facilities are a few years newer.  During this time, its work
has been basic and applied research in artificial intelligence and closely
related fields, including computer
vision, problem solving, and control of an artificial arm and a vehicle.
This work has been fully described in
our Artificial Intelligence Memoranda and in published
papers.  The bibliography of this proposal gives references [Section 1.6].
Here is a short list of what we consider to have been our main accomplishments.

Robotics
.NARROW 4,4

 Development of vision programs for finding, identifying and describing
various kinds of objects in three dimensional scenes.  The scenes include objects with flat
faces and also curved objects.

 The development of programs for manipulation and assembly of
objects from parts.  The latest result is the complete assembly of an automobile
water pump from parts in known locations.

.WIDEN
Speech Recognition

.NARROW 4,4
 Development of a system for recognition of continuous speech, later transferred
to Carnegie-Mellon University and now being expanded upon elsewhere.
  Additional research in speech recognition
has been done here and will continue if support can be found.

.WIDEN
Heuristic Programming

.NARROW 4,4
Our support of Hearn's work on symbolic computation led to the development
of REDUCE, now being extended at the University of Utah and widely used
elsewhere.

 Work in heuristic programming resulting in Luckham's resolution
theorem prover.  This is currently about the best theorem prover in existence, and it
puts us in a position to test the limitations of current ideas about
heuristics so we can go beyond them.

.WIDEN
Representation Theory

.NARROW 4,4
 Work in the theory of how to represent information in a computer
is fundamental for heuristic programming, for language understanding
by computers and for programs that can learn from experience.  Stanford
has been the leader in this field.

.WIDEN
Mathematical Theory of Computation

.NARROW 4,4
 Our work in mathematical theory of computation is aimed at
replacing debugging by computer checking of proofs that programs
meet their specifications.  McCarthy, Milner, Manna, Floyd, Igarashi,
 and Luckham
have been among the leaders in developing the relevant mathematical
theory, and the laboratory has developed the first actual proof-checking
programs that can check proofs that actual programs meet their specifications.
In particular, Robin Milner's LCF is a practical proof checker for a revised version
of Dana Scott's logic of computable functions.

.WIDEN
Research Facilities

.NARROW 4,4
Development of a laboratory with very good computer and program facilities
and special instrumentation for the above areas.  At present, we have the
largest general purpose display-oriented timesharing system in the world.
Unfortunately, it also appears to be the most heavily loaded timesharing
system in the world.

 The development of a mechanical arm well suited to manipulation research.
It is being copied and used by other laboratories.

 In the course of developing our facilities, we have improved
LISP, developed an extended Algol compiler called SAIL, written
editors, loaders, and other utility programs for the PDP-10 and made
numerous improvements to time-sharing systems.  Many of these programs,
particularly LISP and SAIL,
are used directly in dozens of other computer centers.

We have designed an advanced central processor that is about 10 times as
fast as our PDP-10.  We plan to construct this processor and make it
operational within the early part of this proposal period.

.WIDEN

	In the next period, we propose to continue our work in these areas
with the following expected results:

.NARROW 4,4
The artificial intelligence problem still won't be conclusively
solved.  As a scientific subject, the study of the mechanisms of
intelligence is as difficult as physics, chemistry, or biology, and decisive
results will be slow in coming and require genius.

Our identification of intellectual mechanisms and our ability
to make machines carry them out will advance.  The precise results cannot
be predicted.

.WIDEN
	In short term problems and in applications, we can make more
definite predictions:

.NARROW 4,4
We will lay the groundwork for industrial implementation of automatic
assembly and inspection stations, employing computer vision and general
purpose manipulators.

We will be able to represent in the computer the full
information required for computer solution of problems such as
the travel problem given in Section 1.1, below.

We will be able to check proofs of the correctness of actual
though simple compilers into PDP-10 machine language.
.WIDEN
.GROUP
The proposed structure of the Project
is given in Figure 1.

.BV

          Figure 1.  Structure of the Stanford Artificial 
                        Intelligence Project

               Principal Investigator:  John McCarthy
                                  |
            ______________________|______________________
           |             |                 |             |
           |             |                 |             |
    _______|_______      |                 |      _______|_______
   |   Computer    |     |                 |     | Mathematical  |
   |   Vision &    |     |                 |     |  Theory of    |
   | Manipulation  |     |                 |     | Computation   |
   |_______________|     |                 |     |_______________|
    Feldman,Binford,     |                 |      McCarthy,Weyhrauch,
      Paul, Quam         |                 |           Luckham
                         |                 |
                         |                 |
                  _______|_______   _______|_______
                 |               | |               |
                 |Representation | |   Automatic   |
                 |    Theory     | |   Deduction   |
                 |_______________| |_______________|
                     McCarthy       Luckham,Green,Allen
.END APART
.SS ROBOTICS

Current research staff:  Jerome Feldman, Tom Binford, Lou Paul, Lynn
Quam, Bruce Anderson, Craig Cook, Randy Davis, Kicha Ganapathy, Gunnar
Grape, Ram Nevatia, Richard Orban, Karl Pingle, Vic Scheinman, Yoram
Yakimovsky.

During the past several years our  vision  effort  has  shifted  from
simple scenes of polyhedra to complex objects in richer context.  Not
all problems with towers of polyhedra have been solved, but there are
a  host  of  reasons  for moving on.  Manipulation has also proceeded
from manipulation of blocks to assembly operations for  many  of  the
same  reasons.     Minsky and Papert [Minsky 1972] have described the
central  issues  and  the  intellectual  position  of   robotics   in
artificial intelligence. We refer to that discussion as background to
our motivation to direct research to real world scenes  and  objects.
An  important  motivation is to develop applications for manipulation
and visual feedback capabilities which have been developed here.  The
time is ripe for the first transfer of robotics technology from
laboratory to prototype assembly stations.  We discuss robotics
applications more fully in another section.

Another major motivation is to face new issues; systems have come a
few steps from hierarchy toward heterarchy: from a
succession of homogeneous passes with compounding errors to a system
structure where high and low levels interact closely.  One aspect  of
such a system is the use of many functionally independent techniques;
this has meant experimentation with color and texture,
depth  information, motion parallax and image prediction.  The choice
of problems as paradigms is central to the way  in  which  perceptual
systems  evolve.      A  great deal can be done with simple scenes of
polyhedra using a single image  in  one  color,  ignoring  all  other
perceptual abilities.  Since most perceptual problems can be resolved
within  a  single  image,  one   approach   is   to   exhaust   those
possibilities.      Important  areas  are missed; for example, visual
feedback is clumsy without depth information.   For our plans, visual
feedback for manipulation is primary.

There  are  other  ways  in  which  the  tower  of blocks paradigm is
inadequate  for  our  purposes.      Low  level  feature  description
operators   are   tailored  for  this  case.    Adequate  region  and
edge-oriented descriptors must be evolved.     With  polyhedra,  only
simple  preliminary  grouping  mechanisms have been programmed.  More
intermediate level descriptors must be implemented.

Although real world scenes are much more complicated, they may be
simpler to perceive in that they have a much richer context.  Only
very general knowledge is available in the usual blocks task;
usually color, size, and support are unspecified.  In an assembly
task, sizes are known, color is used to code, and parts fit in
special ways.  Often recognition can be based on obvious special
features.  These tasks are good exercises in developing goal-oriented
visual systems.

Although development and utilization of intelligent robots will
progress gradually, the information sciences will impact industry
enormously over a shorter term.  An immediate effect is to streamline
information flow in the manufacturing process.  Simple systems can
keep track of delayed effects of design decisions, lower the cost of
preparation and transmission, and increase accessibility.  Computer
controlled scheduling, resource allocation, and transportation
routing are important to most industrial production [Anderson 1972].
We see a major contribution from techniques being pioneered in
artificial intelligence laboratories, and such research will have
payoff throughout the entire development of intelligent systems.

Enormous efficiency increases will come from analysis of assembly and
inspection in terms of some "primitive operations".  In the past,
computer cost and lack of sophistication have limited such analysis,
and the slow pace of automation has not pressed hard for it.  Now it
will be possible to find common fabrication and assembly operations
throughout a large company, and to discover commonality in design and
components.  The success of that analysis depends upon the
representation of fabrication operations and shape.  Representation
is a current concern in robotics labs and may have early
applications.  This orientation to the manufacturing system is
already beginning to have effect with the "group technology"
approach, which attempts to describe parts and operations and group
them together in one part of the factory.  Until such analysis and
commonality of operations are established, automation will be much
more difficult.

Considerations are similar for programming automation devices such as
the assembly station we describe.  Even if programming is difficult,
there will be many applications for dexterous assembly systems.  For
example, for the military it is preferable to stockpile programs
rather than parts or systems for contingency.  In batch production,
tapes can be saved to cut stockpiling.  But to achieve full effect on
industry, robot systems must be easy to program for new tasks.  Our
current work is concerned with strategy generation and with
automating visual feedback.  Time-sharing systems will be necessary
to perform scheduling, resource allocation, routing operations, etc.
Thus there is a natural division of labor between the time-sharing
system and small machines.  The strategy generation, the automating
of feedback, and the interactive system for computer-aided mechanical
design are expected to reside in the central system.

A major part of our effort is directed toward the  representation  of
complex  shapes,  for  use  in vision and strategies.  While it might
seem that this effort has only distant applications, for the  reasons
discussed   above,   early   applications   will  be  for  convenient
man/machine   interaction   for   programming    robots    and    for
systematization.   We can expect that in general, much of the work in
symbolic representation of the real world  will  have  early  payoff.
Present non-intelligent robots are programmed by specifying points
along a path to be repeated.  With computer control, alternative
trajectories can be executed in case of contingency.  The engineer
must then specify what normal function is, how to monitor it, and how
to recognize recoverable situations.  To do so, he can work
symbolically in interaction with a program, specifying normal and
abnormal conditions, the shape of parts (usually some variant
of standard parts), the mating of parts for assembly,  and  schematic
descriptions   of  operations  for  insertion  and  inspection.   The
significance of representations in general is to aid these processes.
Our representation provides a natural way of speaking about the shape
of objects, in terms of a structured description of  part/whole  with
primitives  which  resemble many fabrication operations.   The better
the representation, the more it provides intuition for programs as to
the intent of operations and the function of parts.

A  representation  aids communication in both directions.  We contend
that our representation is important for  computer  graphics.    This
allows  display  of  the  machine's  concepts to the user, and allows
greater use of predictive mechanisms based on internal models.  Input
could  be made visually with a depth measuring system such as we have
been using for the last two years.  Since the operation would be done
on  a central facility, the computer would be adequate for this task.
We expect that the representation will  be  of  considerable  use  in
monocular  vision  also.   We hope to substantiate these predictions.
Another application is  automating  design  of  numerically  machined
parts.    That  will be simplified to the extent that the machine and
the user speak the same language of subparts and their  descriptions.
Benefits would come from ease of programming and interaction,
variable tolerances for critical parts, and better monitoring of
operations.

.SSS THE PILOT ASSEMBLY STATION

We  seek  other  funds  to  partially support application of robotics
techniques to industrial assembly and inspection.   We will implement
over  the  next  few years a prototype assembly station and a support
system for programming the assembly station for its tasks.

The assembly station is assumed to be normally connected to  a  large
time-shared computer facility. Not only does this appear to be a good
design for our problem, but it is also necessary  for  an  integrated
programmed automation facility.  The only emphasized statement in the
ARPA study report [Anderson  1972]  is:     "The  maximum  impact  of
computers   in  manufacturing  will  come  from  complete,  real-time
computer cognizance and  control  of  all  processes  and  resources,
allowing the precise scheduling and allocation of these resources."

Each  assembly  station will consist of a small digital computer, one
or more manipulators and cameras. Manipulators and  cameras  will  be
combined  as  modules  in  systems  to suit a given task, and will be
digitally controlled by the station computer.

The program in the station computer will be flexible as to the number
and  position of manipulators and cameras. It will control input from
the cameras, provide for some pre-processing of the picture  and  the
application  of  simple  tests to the visual data.     In the case of
manipulators it will provide for simultaneous servo control. It  will
also  process  other  inputs,  such  as  from  touch  sensors  on the
manipulators, from photocell interrupt devices, limit switches,  etc.
Both  analog  and binary outputs will be available for the control of
other devices.

Each station computer will have access to a task  program  specifying
the actions and tests it should perform to carry out its task.   This
task program will be compiled by the central computer.    Compilation
will  consist of the analysis of predicted scenes in order to specify
simple tests that the station computer can carry out, the calculation
of  collision free trajectories for the manipulators, the sequence of
operations, and simple contingency plans, etc.  We are planning for
tasks   whose  normal  execution  does  not  require  detailed  scene
analysis.  If scene analysis becomes necessary (e.g. a dropped  part)
we  envision the station sending an image to the central computer for
processing.
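
By way of illustration only, the following sketch (in present-day
Python, with all names hypothetical) shows one way such a compiled
task program might be organized: a sequence of steps, each pairing an
action with a simple test the station computer can run, an optional
contingency step, and escalation to the central computer when no
contingency applies.
.BV

class StepFailure(Exception):
    pass    # a step's test failed and no contingency applies

class Step:
    def __init__(self, action, test, contingency=None):
        self.action = action            # callable performing the motion
        self.test = test                # cheap check run at the station
        self.contingency = contingency  # optional recovery Step

    def run(self):
        self.action()
        if self.test():
            return
        if self.contingency is not None:
            self.contingency.run()      # one level of local recovery
        else:
            raise StepFailure("no contingency plan for this step")

def run_task(steps, interrupt_central):
    # Execute a compiled task; refer unplanned failures upward.
    for step in steps:
        try:
            step.run()
        except StepFailure:
            interrupt_central(step)     # central computer takes over

.END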

The central computer will perform overall control of  the  individual
assembly  stations  and the relative rates of production.   If at any
station a planned test fails for which no contingency plan
exists, or any unexpected sensor signals are received, the central
computer will be interrupted and appropriate actions taken.

Most of the hardware required  for  the  assembly  station  has  been
developed  for  the  laboratory.    Manipulators  and cameras will be
discussed in detail below. The major additional problem is to  refine
the   existing   designs   to  meet  the  demands  of  an  industrial
environment.

The most difficult part of the assembly station, and the heart of
this proposal, is a demonstration of the feasibility of the station.
We plan to do this by developing programs which actually assemble and
inspect a variety of fairly complex manufactured items.

A typical example is our current project, assembly of an automobile
water pump.  The major parts are to be located; the screws are
supplied by a feeder, as is standard.  Locating the body and cover of
the pump requires a gross localization, then a fine localization of
screw holes.  Touch provides final accuracy and independent
coordination of the hand.  The placing of alignment pins within
screw holes could be monitored visually, although in this case the
previous actions provide adequate accuracy.  Placing of the cover
over alignment pins can be carried out without vision.  Improved
force sensing now under development will be useful for that
operation.

As  we move to more difficult tasks, the sophistication required goes
up quite rapidly.  The  current  manipulation  routines  seem  to  be
expandable  to cover anticipated problems, especially since we intend
to employ special purpose manipulators (tools) for some tasks.    The
main part of our effort will be vision for assembly; an assembly task
will be performed using vision, touch, and two hands.     The  vision
effort  will  be  integrated with other abilities, and while no great
generality will be stressed, new modules will  be  incorporated  into
the  system,  and a set of techniques will evolve.  These will depend
heavily on techniques similar to  those  of  our  recent  studies  in
visual  feedback  and hand/eye coordination [Gill 1972]. The internal
model of the world will be  used  to  predict  visual  appearance  of
selected  areas  of  the  scene, and a set of verification procedures
will confirm or contradict the predictions.  Many of  the  operations
will  be  connected  with  supplying  feedback  to  manipulation, for
alignment and insertion. These visual procedures will have models for
the parts and approximate locations.

This and other visual tasks connected with assembly will initially be
performed as special case problems.    In  addition,  to  make  these
techniques   available,   we  will  pursue  development  of  strategy
programs.  This step is essential if intelligent assembly devices are
to  be conveniently programmed for complex tasks. Within the division
of labor between large and small machines, this facility must  reside
in  the  central  time-sharing  facility which has large memory, file
storage, and problem-solving languages.  The system must have a model
of the parts for assembly and a sequence of instructions for
assembly.  It will be necessary to have a representation of shape and
models for usual assembly operations such as insertion and alignment,
and the ability to solve simple problems in connection with these
operations, all this to enable convenient communication with the
user.  When these facilities exist, new features and operations can
be incorporated smoothly, so that users can call on a growing library
of planning modules.  A planning effort is a cycle of machine
planning and man-machine interaction which results in a plan network
for the minicomputer which controls the individual machines.

.SSS RESEARCH IN VISION

Our  past work on visual feedback [Gill 1972] in stacking objects and
inserting objects  in  holes  is  directly  relevant  to  vision  for
assembly.  The system provides feedback for picking up; then, at the
final position, with the hand still holding the block and in view, it
predicts the appearance of edges and corners within a small window
around the faces to be aligned and uses a corner/edge finder to
determine the alignment error.  The incremental motion capability of
the arm is used for error correction.  That work incorporates
self-calibration  to  coordinate  hand and eye coordinate systems and
maintain reliability (by maintaining  calibration),  as  well  as  to
limit   search   by  accurately  predicting  expected  features.  Two
well-developed systems exist to aid future work in  visual  feedback.
GEOMED  [Baumgart 1972a] is a geometric editor which allows the input
of  geometric  models  and  manipulation  of  these  models.   OCCULT
[Baumgart  1972b]  is a hidden line elimination program which gives a
symbolic output, valuable for vision, in a few seconds.
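
A minimal sketch (Python, hypothetical names) of the
predict-and-verify cycle just described: the internal model predicts
where an edge should appear, an edge finder measures where it
actually is within a small window, and the residual drives an
incremental arm motion until the alignment error is tolerable.
.BV

def visual_servo(predict_edge, find_edge, move_increment,
                 tolerance=0.5, max_cycles=10):
    # Drive the hand until predicted and observed edges agree.
    for _ in range(max_cycles):
        px, py = predict_edge()        # from the internal world model
        ox, oy = find_edge(px, py)     # search a small window only
        ex, ey = ox - px, oy - py      # alignment error in the image
        if abs(ex) <= tolerance and abs(ey) <= tolerance:
            return True                # aligned within tolerance
        move_increment(ex, ey)         # small corrective arm motion
    return False                       # give up; refer to higher level

.END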

We have a set of visual  operations  which  provide  confirmation  of
predictions  and  direct  interaction  with the visual image.     For
outdoor scenes, color is an important basis for region  forming,  and
for  preliminary scene organization. Several preliminary color region
analysis programs have been developed [Bajcsy
1972].    These will be extended as described below. An edge operator
[Hueckel 1969,1972] and a region-based corner/edge finder [Gill 1972]
both  operate  on small windows of the scene, as directed by a higher
level.     An edge verifier  is  used  to  confirm  or  reject  edges
predicted  from  context [Tenenbaum 1970].   Color identification has
long been demonstrated.   Recently, a program  has  been  implemented
based  on  Land's Retinex model of color constancy in vision.    This
involves taking intensity measurements with the camera  through  red,
green,  and  blue filter, and setting up a calibration scheme for the
Goethe color triangle in the absence of any external  color  standard
(white).   This is achieved by normalizing the intensities of regions
within the scene through each filter, and defining a  color  triangle
derived  from  this  ordering  data,  with the center of the triangle
designated as white.  The regions may then be assigned color names in
the normal way.  The program as it stands can cope reasonably well
with table top scenes under varied lighting conditions (sunlight,
quartz-halogen, and fluorescent), and therefore shows the same kind of
color constancy as do humans.
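
The scheme lends itself to a short sketch.  The following (Python; a
simplification, not the program itself) normalizes region intensities
through the three filters against the scene as a whole, so that the
scene average plays the role of the missing white standard and the
resulting triangle coordinates are unchanged by a uniform change of
illuminant.
.BV

def color_triangle(regions):
    # regions: list of (r, g, b) mean intensities, one per region.
    # Returns per-region triangle coordinates; (1/3, 1/3) is "white".
    n = len(regions)
    avg = [sum(reg[i] for reg in regions) / n for i in range(3)]
    coords = []
    for r, g, b in regions:
        nr, ng, nb = r / avg[0], g / avg[1], b / avg[2]
        s = nr + ng + nb
        coords.append((nr / s, ng / s))
    return coords

# Doubling every intensity (a change of illuminant) leaves the
# coordinates unchanged -- the color constancy described above.
scene = [(10.0, 20.0, 30.0), (40.0, 10.0, 10.0)]
assert color_triangle(scene) == color_triangle(
    [(2 * r, 2 * g, 2 * b) for r, g, b in scene])

.END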

We have directed part of our work  to  analysis  of  outdoor  scenes.
Outdoor  scenes were analyzed with a combination of color and texture
region analysis  [Bajcsy  1972];  depth  cues  were  taken  from  the
gradient of texture.  Symbolic representations of texture and of the
world were adopted; descriptors of texture elements were obtained as
analytic expressions from the Fourier spectrum.  A considerable
improvement could be made by combining these techniques with  several
techniques which obtain depth information from correlation of images.
Motion parallax has been used with  a  succession  of
images.   The  successive  images  differ little from one another and
allow local  correlation  without  ambiguity.     In  a  sense,  this
combines the advantages of narrow and wide angle stereo, since stereo
ambiguity is avoided, while large baselines  can  be  accumulated  by
tracking  regions  over  several  images. Another system incorporates
several facilities in wide angle stereo [Quam 1972]; local statistics
are  used  to  limit  search for correspondence of small areas in two
images.  An interesting feature is self-orientation: given two
cameras whose orientation is unknown, the system uses guided search to
establish a few correspondences, then determines the relative
orientation of the two cameras.  Such self-orientation and
self-calibration facilities have considerable significance for
industrial systems, which can thus be kept calibrated.  These are
complementary depth sensing techniques; motion parallax can  be  used
for  planning  paths and characterizing large scale terrain features.
Two camera stereo can be used for  local  manipulation  and  disaster
avoidance.   Visual  feedback  with stereo vision is potentially more
powerful than without.    We have a program  which  has  demonstrated
stereo visual feedback.
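
A sketch of the underlying correlation step (Python; illustrative
only, with hypothetical names) follows.  A small patch from one image
is matched along a search range in the other by
sum-of-squared-differences; the systems described above additionally
use local statistics to prune candidates and track regions over many
images to build up a long effective baseline.
.BV

def match_patch(left, right, row, col, half=2, search=20):
    # Images are lists of rows of numbers.  Find the column in
    # `right` whose patch best matches the patch of `left`
    # centered at (row, col); return the disparity.
    def ssd(c2):
        total = 0.0
        for dr in range(-half, half + 1):
            for dc in range(-half, half + 1):
                d = left[row + dr][col + dc] - right[row + dr][c2 + dc]
                total += d * d
        return total
    candidates = range(max(half, col - search), col + 1)
    best = min(candidates, key=ssd)
    return col - best          # disparity, inversely related to depth

.END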

A  thesis  by  Agin  [Agin  1972]  showed  considerable  progress  in
representation and description of complex objects.  He implemented  a
laser  ranging  device [Binford 1970] which was operational two years
ago.    Using depth data and a representation of shape [Binford 1971]
which  we  have developed, we have been able to describe the parts of
objects  like  a  doll,  snake,  and  glove.        The
representation is based on segmenting objects into parts connected at
joints.    Parts  are  described  by  a  volume  representation  with
arbitrary  cross  section defined along a space curve.  The data were
analyzed by techniques suggested by the representation.   Preliminary
segmentation  was made into parts suggested by some primitives of the
representation, smoothly varying,  locally  cylindrical  parts.    An
iterative  procedure  defined  an  approximation  to  the axis of the
segment, fit  circular  cross  sections  normal  to  the  axis,  then
obtained  an  estimate  of  a global cross section function which was
then used to extend the axis and cross section as far as consistent.
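
The representation itself is simple to state.  The following sketch
(Python; an illustration of the data structure, not Agin's program)
records each part as an axis sampled along a space curve with a
cross-section radius at each sample, and records the joints that
connect parts:
.BV

class Part:
    # A locally cylindrical segment: axis samples plus radii.
    def __init__(self, name, axis, radius):
        self.name = name      # e.g. "torso"
        self.axis = axis      # [(x, y, z), ...] along the space curve
        self.radius = radius  # cross-section radius at each point

class Body:
    # An object is a set of parts connected at joints.
    def __init__(self):
        self.parts = {}
        self.joints = []      # pairs of part names

    def add_part(self, part):
        self.parts[part.name] = part

    def connect(self, a, b):
        self.joints.append((a, b))

# A doll-like figure: a torso with one arm attached at a joint.
doll = Body()
doll.add_part(Part("torso", [(0, 0, z) for z in range(5)], [2.0] * 5))
doll.add_part(Part("arm", [(x, 0, 4) for x in range(1, 5)], [0.5] * 4))
doll.connect("torso", "arm")

.END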

A study of visual and depth ranging techniques has been carried out
which provides a background for choosing sensors and interfaces, and
for making cost estimates [Earnest 1967, Binford 1972].  An interesting
result  is  that  the  cost of a laser ranging system such as ours is
less than $1k given a TV system. We have maintained a calibrated hand
and  vision system for several years [Sobel 1970], and have developed
self-calibration  techniques  with  one  and  two  eyes   [Quam 1972,
Gill 1972].

A  system  for  understanding  scenes of blocks from single views has
been implemented [Falk 1971]; it segments objects, matches against
internal  models,  predicts  appearance  from  its  assumptions,  and
searches for verification and contradiction based  on  the  predicted
image.    Support and spatial relations are also obtained.  There has
been past work on programs  to  obtain  line  drawings  as  input  to
recognition  systems.  An earlier system was based on limited context
of expecting continuation of edges at vertices and predicting missing
edges  in certain expected configurations [Grape 1970]. A more recent
approach has been to use models within the process of generating line
drawings,  so  that  the  most  reasonable next guess can be made and
context used intrinsically in the formation of line drawings.

.IF LINES<7 THEN NEXT PAGE
←VISUAL DEVELOPMENT FOR ASSEMBLY

In a later section, we describe assembly of a pump carried  out  with
the  manipulator  using  touch,  without vision.  We are implementing
vision programs to aid the assembly. In  the  pump  assembly,  visual
tasks  consist  of  gross  location  of  large  parts, then using the
information to predict location of images of holes, and monitoring of
insertion  into holes.   These tasks are typical of the simple visual
operations which we will implement or adapt  for  assembly.  We  will
greatly  expand our efforts with visual feedback.  At present, visual
feedback techniques are limited to polyhedra.  We will improve image
prediction techniques to predict images of objects with curved edges.
We will develop  a  simple  region  analysis  program  which  assumes
adequate contrast to locate the curved edges predicted within a small
window.   Color will be used when necessary.      We  will  implement
two-dimensional  matching  of  boundary curves with predicted curves.
These  will  be  combined  with  our  existing  edge   operator   and
corner/line  operator.     We  will  maintain a calibrated system and
improve calibration updating so that reliability is maintained  at  a
high level.
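
One of these operations, two-dimensional matching of an observed
boundary against a predicted curve, can be sketched briefly (Python;
a simplification with hypothetical names): score a candidate match by
the mean distance from each predicted point to the nearest observed
boundary point, and accept it when the score is small.
.BV

import math

def curve_match_score(predicted, observed):
    # predicted, observed: lists of (x, y) points.
    total = 0.0
    for px, py in predicted:
        total += min(math.hypot(px - ox, py - oy)
                     for ox, oy in observed)
    return total / len(predicted)   # mean nearest-point distance

def matches(predicted, observed, threshold=1.5):
    return curve_match_score(predicted, observed) <= threshold

.END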

.IF LINES<7 THEN NEXT PAGE
←PROPOSED RESEARCH

We  will improve the laser hardware so that it can be used widely and
extend our work with representation of complex  objects  using  depth
data.     We  are able to make complete descriptions of objects which
consist of a single primitive segment, such as a torus, a snake or  a
cone.     The  programs can segment dolls, toy horses, in intuitively
natural ways [Agin 1972]. It is necessary to improve our segmentation
process  in  order to proceed to joining primitive segments at joints
and make complete descriptions of complex objects.     When  we  have
these  descriptions,  the computer will have symbolic internal models
of objects.   No recognition has yet been done, although we are in  a
position  to  recognize  those  objects which have only one primitive
segment, for which we can make complete descriptions. For recognition
of  more complex objects, once we have complete descriptions, we will
encode the basic properties of the  representation  within  a  visual
memory scheme which allows access to hypothesized similar
three-dimensional shapes, with associated surface information, for
closer comparison with extensive verification.  We intend to apply
large parts of our programs to the description and recognition of
curved objects using only monocular intensity information.  This is a
much harder problem, but within our representation it shares much of
the same code and relies on the same conceptual basis of local
three-dimensional models.


We  will  continue  work  on  color region analysis which is based on
proximity rather than contiguity.  This was the plan outlined in
previous proposals, and it has been shown necessary by our work with
textured scenes [Bajcsy 1972].  Another effort combines region analysis with
a  decision  theoretic approach based on a world model. We anticipate
that these modules will aid the integrated system to function in more
complex  environments than have been attempted in the past.  There is
a growing  effort  on  navigation  of  vehicles  in  outdoor  scenes.
While this is not directly relevant to the industrial environment, it
does reflect the growing involvement in  complex  visual  tasks,  and
will lead to further development of an integrated visual system.

A  strategy  project  is  underway  dealing with the world of blocks.
The project begins from the idea that robot vision cannot be  studied
except  as a part of intelligent behavior in general.  Visual ability
is FOR something,  and  without  reasonable  expectations  about  the
world,  the  task of seeing is bound to be extremely difficult.    We
plan to implement a system which can  do  simple  manipulation  tasks
("pick  up  the  red  block",  "put  all the cubes in the box") in an
environment of plane-faced objects, and operating in a dialogue mode.
The  system  will have a good but imperfect world model all the time,
and the essence of its ability will be the constant use of this model
to  understand  situations  and  in particular to recover from errors
("Is that block no longer there because the arm  knocked  it  off?").
The  main  point  of  the  project is to find a good organization for
knowledge about the simple world in question:  we  expect  that  when
this  is  done  the  complex  subroutines  of current vision programs
(line finders, etc.) will be found to be unnecessarily complex in that
they embody, in an inefficient way at a low level, knowledge which our
system will contain at a higher level.  In part, this will be done by
expanding  the facilities available.  A stereo vision module is being
programmed, using line  drawings  and  integrating  stereo  with  the
segmentation and recognition processes.

We  are  at  work  on  a  program which exploits new features of SAIL
[Feldman  1972]  for  dealing   with   multi-process   tasks   in   a
problem-solving environment. The aim is to construct a driver program
to  accomplish  a  simple  assembly  task  under  somewhat  uncertain
conditions.  Making use of the new language features, one can arrange
for such a program to cope with errors in the execution  of  tasks  -
e.g. to recover from dropping things, breaking things, and further to
try several different but possibly cooperative methods for solving  a
problem.   It  is intended that this program will have its own vision
procedures  specially  adapted  for  finding  tools,  e.g.  wrenches,
screwdrivers, etc., and parts, e.g. nuts and bolts. An interpreter is
being developed to allow flexible description  of  the  task  by  the
user.
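
A rough sketch of this control idea (rendered in Python rather than
SAIL, with hypothetical names) is the following driver, which tries
several methods in turn, recovers from execution errors such as a
dropped part, and only gives up when all methods are exhausted:
.BV

class ExecutionError(Exception):
    pass    # e.g. a dropped or broken part detected by the sensors

def attempt_task(methods, recover):
    # methods: callables that either complete the task or raise
    # ExecutionError; recover: restores a known world state.
    for method in methods:
        try:
            method()
            return True        # task accomplished
        except ExecutionError:
            recover()          # e.g. re-find the dropped part
    return False               # all methods exhausted; report failure

.END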

.SSS MANIPULATOR DEVELOPMENT

Manipulator   research  began  at  Stanford  Artificial  Intelligence
Laboratory  in  1965  with  the  acquisition  and  programming  of  a
prototype  Rancho Orthotic Arm.  A later version of this arm was used
in vision directed block stacking programs.

Our present arm, the result of a continuing design project, replaced
the Rancho Arm in 1970.  This arm, like its predecessor, was controlled
by a digital servo program running under the time-sharing system.
The new arm was used in the "Instant Insanity" demonstration in 1971.
A second arm of the same type, with only minor modifications, is
currently being interfaced to the computer.  Both arms will share a
common work space and can be used individually or together for
two-handed tasks.

Manipulator research moves ahead simultaneously on three fronts:
control, primitives, and devices.  Tasks are translated into an
existing hand language and, where the task cannot be accomplished in
terms of existing primitives, new primitives are formulated in a
consistent manner.  When a new primitive is formulated, real time
routines are developed to perform the action.  This also leads, in
many cases, to the development of new hardware.  Also, as new
developments occur in hardware design, existing real time routines
are either rewritten or modified.

Research is now concentrating on the following areas:

Strain gauge instrumentation of the wrist, in order to allow for more
sensitive interaction of the hand with a  task.     High  resolution,
high  sensitivity  touch sensors, to increase tactile recognition and
control abilities.     These developments will lead in  turn  to  new
control concepts.

Development  of  tools  for the hand. These tools can be instrumented
such that the computer can feel and/or see at the end of the tool.
This  should  be  a very important area of research as it goes beyond
present human capabilities.

Two handed arm control, the planning and execution of tasks requiring
the  coordinated  motion of both arms. This will require an extension
of the existing servo program and  the  analysis  of  synchronization
problems.   The  development of collision avoiders, so that arms will
not collide with other objects and with each other.

Modification of  the  existing  arm  design  to  provide  for  better
incremental motions, more degrees of freedom and a more rugged design
suitable for industrial use.   Development of arm  and  eye  computer
interfaces.     These  contain synchronized analog/digital converters
for input and current controlled drivers for output.

The development of new hands is expected as more  complex  tasks  are
attempted.    At present, with only one hand, we have been performing
the positioning type of hand  actions.     When  the  second  arm  is
operational we will be able to perform holding and pulling actions as
well.

←DEMONSTRATIONS OF MANIPULATION

Operations controlled by force have been demonstrated  by  turning  a
crank  and  screwing  a  nut  onto  a  bolt.  The  hand  language and
manipulation facilities are shown in the assembly of  a  small  pump.
The  positions of the parts are known approximately; they are located
on a pallet as expected in industrial practice.  By touch,  the  hand
locates  position  and  orientation of the pump body more accurately.
Aligning pins are inserted into the holes to guide the gasket and the
pump  cover.  Screws are obtained from a feeder, as is standard.  All
operations have  been  performed  in  the  assembly,  but  the  total
reliability  is  still  sufficiently  low  that  one error or another
occurs.  Vision has not been used in the assembly.
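
For concreteness, the assembly can be written down as a plan over
named primitives.  The sketch below (Python; the step names are
illustrative, not our hand language) shows the shape of such a plan
and a trivial dispatcher of the kind the station computer would run:
.BV

PUMP_ASSEMBLY = [
    ("locate_by_touch", "pump_body"),    # refine pallet position
    ("insert", "aligning_pin", "hole_1"),
    ("insert", "aligning_pin", "hole_2"),
    ("place_over_pins", "gasket"),
    ("place_over_pins", "pump_cover"),
    ("fetch_from_feeder", "screw"),
    ("drive_screw", "screw", "hole_1"),
]

def execute(plan, primitives):
    # primitives: maps operation names to servo-level routines.
    for op, *args in plan:
        primitives[op](*args)

.END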

.SSS SYSTEM REQUIREMENTS FOR ADVANCED AUTOMATION

A key element in the development of any large programming  effort  is
the  software  support.   There has been continual effort along these
lines at the Stanford Artificial  Intelligence  Laboratory  and  many
useful  ideas have been developed.  We propose to extend the research
along two separate lines:  support for research and  system  software
for  operating  automated systems.  We will briefly describe our past
efforts in this area and what we propose to undertake.

The Stanford Artificial  Intelligence  Laboratory  has  pioneered  in
several  areas  of  system  programming,  including  display-oriented
timesharing systems, interactive graphics, and programming languages.

A great deal of additional effort has gone  into  direct  support  of
hand-eye  research.     Very  early, we established a set of standard
formats and library routines [Pingle 1972a].  A  programming  language,
SAIL, was implemented in 1968.  The language incorporates special
features  for  large real-world-oriented problem solving systems
and is currently in use at a number of  Artificial  Intelligence  and
other laboratories.

As  we began to assemble larger and more sophisticated collections of
programs, we felt a need  for  flexible  mechanisms  for  cooperating
sequential  processes.  The resulting monitor and language features
[Feldman & Sproull 1971]  have  worked  well  for  us  and  have  been
influential  on  other  projects.      Recently,  we  have  added  an
additional set of features  to  SAIL  [Feldman  1972]  to  facilitate
non-deterministic and event-directed programming.  The early
reactions to this work, both within the laboratory and without, have
been very encouraging.


A major share of the programming research effort will be concentrated
on developing a programming system for automated assembly.  This will
combine ideas  from  our  past  work  on  languages,  representation,
computer  graphics and arm control.  We view this task as a branch of
the larger effort on automatic programming being undertaken  in  this
laboratory and elsewhere.
.SSS REFERENCES
.BEGIN INDENT 0,8
[Anderson 1972] R. Anderson, "Programmable Automation, The Future
of Computers in Manufacturing", Datamation, Dec 1972.

[Agin 1972] G. Agin, "Description and Representation of  Curved  Objects",
Ph.D.  Thesis, Computer Science Dept., Stanford University, September
1972.

[Bajcsy 1972] R. Bajcsy, "Computer Identification of Visual Texture",
Computer Science Department, Stanford University, 1972.

[Baumgart 1972a] B. Baumgart, "GEOMED - A Geometric Editor", SAILON-68,
Artificial Intelligence Lab, Stanford University, May 1972.

[Baumgart 1972b] B. Baumgart, "Winged Edge Polyhedron Representation",
AIM-179, Artificial Intelligence Lab, Stanford University, October 1972.

[Binford 1970] T.O.Binford, "Ranging by Laser", Internal Report,
Artificial Intelligence Lab, Stanford University, Oct 1970.

[Binford 1971] T.O. Binford, "Visual Perception by Computer", invited paper
at IEEE Systems Science and Cybernetics Conference, Miami, Dec 1971.

[Binford 1972] T.O. Binford, "Sensor Systems for Manipulation", Conference
on Remotely Manned Systems, JPL, Pasadena, Calif., Sept 1972.

[Earnest 1967] L.D. Earnest, "Choosing an Eye for a Computer", AI Memo 51,
Artificial Intelligence Lab, Stanford University, 1967.

[Falk 1971] G. Falk, "Scene Analysis Based on Imperfect Edge Data", Proc. 2IJCAI,
Brit. Comp. Soc., Sept. 1971.

[Feldman & Sproull 1971] J.  Feldman  and  R.  Sproull,  "System  Support for the Stanford
Hand-eye System", Proc. 2IJCAI, Brit. Comp. Soc., Sept. 1971.

[Feldman 1972] J. Feldman et al., "Recent Developments in SAIL - An Algol
Based Language for Artificial Intelligence", CS 308, Stanford University,
August 1972.  To appear in Proc. Fall Joint Computer Conference, 1972.

[Gill 1972] A. Gill, "Visual Feedback and Related Problems in Computer
Controlled Hand-eye Coordination", AIM-178, Stanford AI Laboratory,
October 1972.

[Grape 1969] G. Grape, "Computer Vision through Sequential Abstraction",
Artificial Intelligence Laboratory, Internal Report, 1969.

[Grape 1970] G. Grape, "On Predicting and Verifying Missing Elements in
Line-Drawings", Artificial Intelligence Laboratory, Internal Report,
March 1970.

[Hueckel 1969] M.H. Hueckel, "An Operator Which Locates Edges in Digitized
Pictures", Stanford Artificial Intelligence Laboratory AIM-105,
December 1969; also JACM, Vol. 18, No. 1, January 1971.

[Minsky 1972] M. Minsky, "Artificial Intelligence Progress Report", AI Memo
252, MIT AI Lab, 1972.

[Paul 1971]  R. Paul,  "Trajectory  Control  of  a  Computer  Arm",  2nd
International Joint Conference on Artificial Intelligence, 1971.

[Paul 1972] R. Paul, "Modelling, Trajectory Calculation and Servoing of  a
Computer  Controlled  Arm",  Ph.D.   Thesis,  Computer Science Dept.,
Stanford University, Sept. 1972.

[Pingle   1972a]   Karl   K.  Pingle,  "Hand/Eye  Library",  Stanford
Artificial Intelligence  Laboratory  Operating  Note  35.1,  January,
1972.

[Pingle 1972b] K.K. Pingle and J.M. Tenenbaum, "An Accommodating Edge
Follower", IJCAI-2; Proceedings of the 2nd International Symposium on
Industrial Robots, May 1972, Chicago, Illinois.

[Quam 1972] L. H. Quam, S. Liebes, Jr., R. B. Tucker, M.  J.  Hannah,
B.  G.  Eross,  "Computer  Interactive  Picture Processing", Stanford
Artificial Intelligence Laboratory AIM-166, April 1972.

[Scheinman 1969] V.  D.  Scheinman,  Design  of  a  Computer  Manipulator,
Stanford Artificial Intelligence Project, Memo AIM-92, June 1969.

[Sobel 1970] Irwin Sobel, "Camera Models and Machine Perception", Stanford
Artificial Intelligence Project Memo AIM-121, May, 1970.

[Swinehart 1969] D. Swinehart, R.  Sproull,  "SAIL",  Stanford  Artificial
Intelligence Laboratory Operating Note 57, November 1969.

[Tenenbaum 1970] J.M. Tenenbaum, "Accommodation in Computer Vision",
Ph.D. thesis in Electrical Engineering, Stanford University, September 1970.
.END
.SS  MATHEMATICAL THEORY OF COMPUTATION

Current research staff:  John McCarthy, David Luckham, Robin Milner,
Richard Weyhrauch, Malcolm Newey, Whit Diffie.

.SSS RECENT ACCOMPLISHMENTS
Recent accomplishments in mathematical theory of computation have
been as follows.

	1. Professor John McCarthy has developed a set of axioms for
the Scott theory, formulated in set theory within first-order logic.
These axioms are in a form acceptable to Diffie's first-order-logic
proof checker.

	2.  Dr. Jean-Marie Cadiou spent the summer further researching
the difference between the "call by value" and "call by name" methods
of evaluating recursive functions.  This represents further work
in the direction of his thesis [2].

	3.  Dr. David Luckham, Dr. Shigeru Igarashi, and Dr. Ralph
London have together worked on the project of producing a running
program for the correct generation of verification conditions for
the language PASCAL.  This work builds on an idea of C.A.R. Hoare
[3].  
	In addition to the verification generator (VG), Dr. Luckham
and J. Morales have used Dr. Luckham's theorem prover to prove
some of the theorems generated by the VG.

	Dr. Luckham has also supervised the thesis of J. Buchanan
[1] in which the notion of a semantic frame has been used to write
programs guaranteed correct by their method of generation.

	4.  Dr. Shigeru Igarashi [4] has explored the question of
extending the notion of the "admissibility" of Scott's computation
induction to a wider class of sentences than considered by Scott.
This work was presented at the MTC Symposium in Novosibirsk, U.S.S.R.

	5.  Dr. Robin Milner and Dr. Richard Weyhrauch completed
their study of definability in LCF.  They produced two papers.  In
[9] LCF was used to express the correctness of programs and a
machine checked proof of the correctness a particular program was
given.  In [8] a proof of the correctness of a compiler was presented.
Dr. Milner also presented an expository paper summerizing all this
work at the Symposium at Novosibirsk [7].  Subsequent to this along
with Dana Scott, work has been done on a version of LCF based on the
type free λ-calculus.  A set of axioms for such a system has been
found.

	6.  Dr. Robin Milner has been studying the theory of processes
using the type free λ-calculus.  This work is still in a preliminary
stage.

	7.  Dr. Richard Weyhrauch has been looking at formal proofs
in several different languages  (first order set theory, first order
logic, LCF) to see if various things learned from LCF carry over to
these languages.

	8.  Malcolm Newey has produced a set of theorems to be used
as a theorem base for people wanting to use LCF.  This includes
axioms for arithmetic, lists and finite sets, as well as about
900 theorems and proofs in these areas.  An A.I. Memo on these will
appear soon.

.SSS FUTURE WORK

	First there are people working on clarifying the extent of
Floyd's method of inductive assertions.  The main tool for this
study is the embryonic verification generator mentioned above and
Dr. Luckham's theorem prover.  To this end Dr. Luckham is planning an
interactive program that uses both.  The main task here is to
1) demonstrate that the formal system suggested by C.A.R. Hoare,
when extended to the full language PASCAL, is correct and usable;
2) program the verification generator (VG), using Hoare's ideas,
to generate sentences which, if provable, would guarantee the
partial correctness of particular programs; 3) use the theorem
prover, perhaps with interactive help, to prove the sentences
generated by the VG.  The work mentioned in 3) is relevant to
this project and is presently continuing.  In addition a new
student, Nori Suzuki, is working on the addition of mechanisms
to the VG to help in proving the termination (or total correctness)
of programs.
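
As a toy illustration of what the VG produces (in Python; not the
PASCAL system described above), the sketch below generates the
standard Hoare-style sentences for a one-assignment while loop:
the invariant holds on entry, is preserved by the body, and together
with the exit condition implies the postcondition.
.BV

import re

def subst(pred, var, expr):
    # P[expr/var]: replace whole-word occurrences of var in pred.
    return re.sub(r'\b%s\b' % var, '(%s)' % expr, pred)

def vcs_assign(pre, var, expr, post):
    # {pre} var := expr {post}  yields  pre -> post[expr/var]
    return ['%s -> %s' % (pre, subst(post, var, expr))]

def vcs_while(pre, inv, cond, body_var, body_expr, post):
    # {pre} while cond do body_var := body_expr {post}, given inv.
    return (['%s -> %s' % (pre, inv)]                         # entry
            + vcs_assign('%s and %s' % (inv, cond),
                         body_var, body_expr, inv)            # preserved
            + ['%s and not (%s) -> %s' % (inv, cond, post)])  # exit

# {x = 0} while x < n do x := x + 1 {x = n}, with invariant x <= n.
for vc in vcs_while('x = 0', 'x <= n', 'x < n', 'x', 'x + 1', 'x = n'):
    print(vc)

.END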

	The second project concerns extending the ideas of D. Scott
and C. Strachey.  A major part of this research is a computer program,
LCF, which implements the logical calculus, in which the notions
of "function" and "functional" can be discussed.  Actually there are
now two such calculi, the original LCF [7,8,9] and the type free
logic (5 above).  The following projects are either already underway
or are contemplated in the near future.  A new machine version of
LCF: this is necessitated for several reasons, 1) the usual run
of features left out, and 2) most important, a better interface with
other programs.  An examination of how to automate the proving of
assertions in LCF: for example, could the proofs given in [8,9]
have been automated?  Drs. Milner and Weyhrauch have several ideas
along these lines and it is likely that Steve Ness will undertake
this problem as a thesis topic.  Malcolm Newey, another student,
hopes to actually use LCF to prove formally the correctness of
LCOM0 and LCOM4.  An informal proof of their correctness was
presented in [5].  Both of these tasks will require the discussion
of how to create subsidiary deduction rules.  There is also the
question (reflecting on 6 above) of the implementation of a
version of LCF for the type free logic.  Whether this will be done
depends on further theoretical work to show its necessity.

	An attempt to automate either of the above ways of proving
properties of programs requires one to talk about proving theorems
in some formal language.  Another project envisioned is a new
first order logic proof checker which is based on theories of
natural deduction, rather than resolution.  One of the as yet
unexplored discoveries of the work of [8,9] was the effect of
the physical structure of LCF on the feasibility of carrying out
proofs.  Dr. Weyhrauch hopes to examine this question in terms of
natural deduction in general.

	These projects are all directed at the more central
questions: what are the right notions of correctness and equivalence
of programs?  In addition, they address themselves to examining
the computational feasibility of automating the procedures
suggested by the theoretical results.  It is hoped that these
studies will eventually lead to a clear notion of correctness that
lends itself to automatic (though perhaps interactive) checking.

.SSS REFERENCES
.BEGIN INDENT 0,5
	[1]  Buchanan, J., Thesis, to appear

	[2]  Cadiou, J.M., "Recursive definitions of partial functions
and their computations", Stanford Artificial Intelligence Project
AIM-163, March 1972.

	[3]  Hoare, C.A.R., "An axiomatic basis of computer programming",
CACM, Vol. 12, pp. 576-580, 583, 1969.

	[4]  Igarashi, S., "Admissibility of fixed-point induction
in first-order logic of typed theories", Stanford Artificial
Intelligence Project, AIM-168, May 1972.

	[5]  London, R.L., "Correctness of two compilers for a LISP
subset", Stanford Artificial Intelligence Project, AIM-151, October
1971.

	[6]  Manna, Z., INTRODUCTION TO MATHEMATICAL THEORY OF
COMPUTATION, Stanford University Computer Science Department,
May 1972.

	[7]  Milner, R. (paper given at Novosibirsk), 1972.

	[8]  Milner, R. & Weyhrauch, R., "Proving compiler correctness
in a mechanized logic", Machine Intelligence 7, Michie, D. (ed.),
Edinburgh, Edinburgh University Press, 1972.

	[9]  Weyhrauch, R. & Milner, R., "Program semantics and
correctness in a mechanized logic", First USA-Japan Computer
Conference Proceedings, Tokyo, Hitachi Printing Company, October
1972.

.END
.SS AUTOMATIC DEDUCTION

Current research staff:  David Luckham, John Allen, Jack Buchanan,
Jorge Morales, and Nori Suzuki.

Research  over  the past year has been directed very strongly towards
developing the applications of automatic proof procedures, especially
in   the  areas  of  mathematical  theorem-proving,  verification  of
computer programs, and automatic generation of programs.   This  work
has  involved  extensive changes and additions to the theorem-proving
program (references include previous A.I. Project reports and [1, 2,
and 3]) and the development of new programs using special features of
the languages MLISP2 [4] and MICRO-PLANNER [5].  We give some brief
details of current work and future plans below.

Most of these projects have involved the discussion of many  problems
(both  theoretical  and  practical)  in  the  mathematical  theory of
computation, and this has resulted in a  great  deal  of  cooperation
between  the  two  groups of people.  In particular, discussions with
Robin Milner and Richard Weyhrauch  on  their  experiments  with  LCF
helped  the  program  verification  project,  and Richard Weyhrauch's
experiments in proving theorems in various formal systems speeded the
extension  of  user  facilities  in  the  theorem-prover.   Different
projects also turned out to have similar practical problems; e.g. the
simplification algorithms  in  LCF  and  in  the  theorem-prover  are
similar,  and  the  data  files built up by M. Newey for carrying out
proofs in LCF have a significant overlap with the files of facts used
by  J.  Morales  in  the program verification system.  As a result of
this extensive common  ground  of  interests  and  problems,  a  more
integrated  development  of  the  various projects is planned for the
future.


.SSS THEOREM PROVING

The  basis of the theorem-proving effort has been the proof procedure
for first-order logic with equality, originally developed by Allen and Luckham  [1].   This
has been extensively modified by J. Allen in recent months; the basic
theorem-proving program has been speeded up by a  factor  of  20,  an
input-output  language allowing normal mathematical notation has been
added, and the program will select search strategies automatically if
the user wishes (we refer to this as automatic mode).  As a result
it is possible for the program to be used by a person who is  totally
unfamiliar  with  the  theory  of  the  Resolution  principle and its
associated rules of inference and refinements.  This program has been
used to obtain proofs of the results announced in several research
announcements in the Notices of the American Mathematical Society, for
example [6, 7, and 8].  Recently (July 1972), J. Morales learned to
use the program
essentially by using it to obtain proofs of the results stated in [8,
June 1972] as an exercise. In the course of doing this he was able to
formulate  simple  ways  of using the prover to generate possible new
theorems in the same spirit as  [8],  and  did  in  fact  succeed  in
extending  the  results of [8].  Furthermore, he was able to send the
authors proofs of their results before they had actually had time  to
write  them  up [R.H. Cowen, private correspondence, August 9, 1972].
The prover is also being used as part  of  the  program  verification
system (below).
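
The flavor of the Resolution principle itself can be conveyed by a
small sketch (an editor's illustration in present-day Python, for
ground clauses only; the actual prover unifies first-order literals
and applies refinement strategies, none of which appear here):
.BEGIN VERBATIM
# A clause is a frozenset of literals; "~p" is the negation of "p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    # Every clause obtainable by resolving c1 against c2 on a literal.
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def refutable(clauses):
    # Saturate under resolution; unsatisfiable iff the empty
    # clause appears.
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = {frozenset(r)
               for a in clauses for b in clauses if a is not b
               for r in resolvents(a, b)}
        if frozenset() in new:
            return True
        if new <= clauses:
            return False            # saturated: satisfiable
        clauses |= new

# p, p implies q, not q -- together unsatisfiable:
print(refutable([{"p"}, {"~p", "q"}, {"~q"}]))    # prints True
.END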

At  present  some  modifications  in  the  prover's  standard  search
strategies  are  being  made  by J. Allen.  A version of this program
with user documentation is  being  prepared  for  distribution to other research groups.   The
addition of a language in which the user can specify his intuition
about how a proof of a given statement might possibly be obtained is
in progress.  J. Allen has already programmed a very preliminary
version  of  this  "HUNCH"  language,  and  has completed the systems
debugging necessary  to  get  a  compiled  version  of  J.  Sussman's
CONNIVER  language running here; HUNCH language may be implemented in
CONNIVER, but  discussions  on  this  point  are  not  yet  complete.
Initially,  it is expected that HUNCH language will be useful
in  continuing  with  more  difficult  applications  in   first-order
mathematical  theories.   A  research report outlining the results of
experiments made over the past two years  is  being  prepared  by  J.
Allen  and  D.  Luckham.   A  report  on  J.  Morales' experiments is
available.

.SSS VERIFICATION OF COMPUTER PROGRAMS

This project was undertaken jointly by Shigeru Igarashi, Ralph London
and  David  Luckham.   The aim is to construct a system for verifying
automatically  that  a  computer  program  is  consistent  with   its
documentation.   It  was  decided  to  restrict attention to programs
written in N. Wirth's language PASCAL [9] since this was the  subject
of  an  axiomatic  study  by  C.A.R.  Hoare  [10].   Essentially  the
verification system has three main  components,  (i)  a  Verification
Condition  Generator  (VCG),  (ii) a simplifier, and (iii) a theorem-
prover.  A first version of VCG has been implemented in MLISP2; it is
a fairly straightforward encoding of a set of rules of inference that
is  derivation-complete  for  the  system given by Hoare in [10]. VCG
will accept  a  PASCAL  program  (including  go  to's,  conditionals,
while's,  procedure  and  function calls with certain restrictions on
the types of the actual parameters of procedure calls,  and  arrays),
together with its documentation, and will output a set of "lemmas" or
conditions that, if provable, will ensure that relative consistency
holds.  The simplifier, which is to process the conditions
output by VCG, has not yet been written.  Initially,  proofs  of  the
conditions  for  simple arithmetical programs and for certain sorting
programs involving array operations, were obtained by D. Luckham and J. Allen
using the theorem-proving program.  A more systematic
series of experiments, in which proofs of verification conditions were
obtained from standard data files using the prover in automatic mode,
has been made by J. Morales.
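
The mechanics of verification condition generation can be suggested
by a toy sketch (an editor's illustration in present-day Python; the
statement forms, the naive textual substitution, and the example
invariant are illustrative assumptions, not the MLISP2
implementation):
.BEGIN VERBATIM
import re

def subst(pred, var, expr):
    # pred with each occurrence of var replaced by (expr).
    return re.sub(r"\b%s\b" % re.escape(var), "(%s)" % expr, pred)

def wp(stmt, post, vcs):
    # Weakest precondition; loop side conditions are appended to vcs.
    if stmt[0] == "assign":                  # ("assign", x, e)
        _, x, e = stmt
        return subst(post, x, e)
    if stmt[0] == "seq":                     # ("seq", s1, s2)
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post, vcs), vcs)
    if stmt[0] == "while":                   # ("while", b, inv, body)
        _, b, inv, body = stmt
        vcs.append("(%s) and (%s)  implies  %s"
                   % (inv, b, wp(body, inv, vcs)))
        vcs.append("(%s) and not (%s)  implies  %s" % (inv, b, post))
        return inv
    raise ValueError(stmt[0])

# sum := 0; i := 0; while i < n do begin sum := sum+i; i := i+1 end
prog = ("seq", ("assign", "sum", "0"),
       ("seq", ("assign", "i", "0"),
        ("while", "i < n", "sum == i*(i-1)//2 and i <= n",
         ("seq", ("assign", "sum", "sum + i"),
                 ("assign", "i", "i + 1")))))

vcs = []
entry = wp(prog, "sum == n*(n-1)//2", vcs)
vcs.append("n >= 0  implies  %s" % entry)
for vc in vcs:
    print(vc)
.END
The two conditions printed for the loop are the invariant
preservation and exit "lemmas"; the final implication is the entry
condition.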

These  experiments are continuing.  It is planned to add new rules of
inference to the theorem-prover and  to  add  some  simple  debugging
strategies  for  pointing  out  possible relative inconsistencies.  A
more flexible version of VCG is planned which will allow the user  to
specify  his  own  rules  of  inference  for  an extension of Hoare's
system.  A preliminary version of the simplifier is essential if  the
whole  verification  process  is  to  be made fully automatic for any
reasonable class of programs.  People currently engaged in this  work
include D. Luckham, J. Morales, N. Suzuki, and R. London (the latter
at ISI, University of Southern California).  A research report on VCG
is being prepared by S. Igarashi, R. London, and D. Luckham.


.SSS AUTOMATIC PROGRAM GENERATION

We mention this work briefly here since it  is  an  outgrowth  of  an
attempt to extend the applicability of the theorem-prover to problems
in Artificial Intelligence, and makes use of a particular notion of a
problem  environment  (called  a "semantic frame") which was designed
for that original purpose.  However, the system as it now stands,  is
best thought of as a heuristic theorem-prover for a subset of Hoare's
logical system.  It has been implemented in LISP using the  backtrack
features  of  Micro-Planner,  by  J.  Buchanan. It accepts as input a
problem environment and a problem and, if successful, gives as output
a  program  for  solving  the  problem.   At  the  moment, the output
programs  are composed of the primitive operators of the environment,
assignments,  conditional  branches,   while's,   and   non-recursive
procedure calls.  This system has been used to generate programs for
solving various problems in robot control and conditional
planning, and for computing arithmetical functions.  A research
report outlining the logical basis for the  system  and  its  overall
capabilities  is  in  preparation by D. Luckham and J. Buchanan.  The
details of its implementation will be available  in  J.R.  Buchanan's
Ph.D. Thesis (also in preparation).
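
The idea can be suggested by a drastically simplified sketch (an
editor's illustration in present-day Python; the operators and facts
are illustrative assumptions).  A problem environment is reduced to a
set of ground facts and primitive operators with preconditions, added
facts, and deleted facts; breadth-first search then returns a
straight-line program, whereas the actual system also generates
conditionals, while's, and procedure calls:
.BEGIN VERBATIM
from collections import deque

OPS = {  # name: (preconditions, added facts, deleted facts)
    "pickup(b)":  ({"handempty", "ontable(b)", "clear(b)"},
                   {"holding(b)"},
                   {"handempty", "ontable(b)"}),
    "stack(b,c)": ({"holding(b)", "clear(c)"},
                   {"on(b,c)", "handempty"},
                   {"holding(b)", "clear(c)"}),
}

def synthesize(initial, goal):
    # Breadth-first search for an operator sequence achieving goal.
    start = frozenset(initial)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, prog = queue.popleft()
        if goal <= state:
            return prog
        for name, (pre, add, delete) in OPS.items():
            if pre <= state:
                succ = frozenset((state - delete) | add)
                if succ not in seen:
                    seen.add(succ)
                    queue.append((succ, prog + [name]))
    return None

print(synthesize({"handempty", "ontable(b)", "clear(b)", "clear(c)"},
                 {"on(b,c)"}))
# ['pickup(b)', 'stack(b,c)']
.END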


.SSS REFERENCES
.BEGIN INDENT 0,5
[1]  John Allen and David Luckham (1970), An Interactive Theorem-
     Proving Program, Proc. Fifth International Machine Intelligence
     Workshop, Edinburgh University Press, Edinburgh.

[2]  Richard B. Kieburtz and David Luckham, "Compatibility and
     Complexity of Refinements of the Resolution Principle",
     SIAM J. Comput., Vol. 1, No. 4, December 1972.

[3]  David Luckham and Nils J. Nilsson, "Extracting Information
     from Resolution Proof Trees", Artificial Intelligence, Vol. 2,
     No. 1, pp. 27-54, Spring 1971.

[4]  David C. Smith, "MLISP", Computer Science Department, Artificial
     Intelligence Memo No. AIM-135, October 1970.

[5]  G. Sussman and T. Winograd, Micro-Planner Manual, Project MAC.

[6]  Chinthayamma (1969), Sets of Independent Axioms for a Ternary
     Boolean Algebra, Notices Amer. Math. Soc., 16, p. 654.

[7]  E.L. Marsden, "A Note on Implicative Models", Notices Amer.
     Math. Soc. No. 682-02-7, p. 89, January 1971.

[8]  Robert H. Cowen, Henry Frisz, Alan Grenadir, "Some New
     Axiomatizations in Group Theory", Preliminary Report, Notices
     Amer. Math. Soc., No. 72T-112, p. A547, June 1972.

[9]  N. Wirth, "The Programming Language PASCAL", Acta Informatica,
     Vol. 1, pp. 35-63, 1971.

[10] C.A.R. Hoare, "An Axiomatic Basis for Computer Programming",
     Comm. ACM, Vol. 12, No. 10, pp. 576-580, 583, October 1969.
.END
.SS FACILITIES

The computer  facilities  of  the  Stanford  Artificial  Intelligence
Laboratory include the following equipment.

.BEGIN VERBATIM
Central Processors:  Digital Equipment Corporation PDP-10 and PDP-6

Primary Store:       65K words of 1.7 microsecond DEC Core
	             65K words of 1 microsecond Ampex Core
                     131K words of 1.6 microsecond Ampex Core

Swapping Store:      Librascope disk (5 million words, 22 million
                     bits/second transfer rate)

File Store:          IBM 3330 disc file, 6 spindles (leased)

Peripherals:         4 DECtape drives, 2 magnetic tape drives
	             (7 track), line printer, Calcomp plotter,
		     Xerox Graphics Printer

Terminals:           58 TV displays, 6 III displays, 3 IMLAC
	             displays, 1 ARDS display, 15 Teletype terminals.

Special  Equipment:  Audio  input  and  output  systems, hand-eye
                     equipment (2 TV cameras, 3 arms), remote-
                     controlled cart
.END
The facility operates nearly 24 hours of every day.  It typically
carries a load of forty-some jobs throughout the day and seldom has
fewer than eight active users all night long.

It is expected that the new  high  speed  processor  currently  being
fabricated  in  our  laboratory  will  become  the  main  timesharing
processor during the early part of the proposal period.  This  should
provide  adequate  computational  support  for  the proposed research
program.

It will be necessary to purchase  some  special  instrumentation  and
peripheral  computer equipment in support of our research in computer
vision and manipulation.  Thus, $75,000 per year  has  been  budgeted
for capital equipment, based on recent experience.
.SS PROJECT PUBLICATIONS
.SSS RECENT ARTICLES AND BOOKS
.BEGIN COUNT REF FROM 1 TO 100; AT ">>" ⊂ APART GROUP NEXT REF; REF!}.∂5 {⊃

Articles and books by members of the Stanford Artificial Intelligence
Laboratory are listed below.
.INDENT 0,5

>> E. Ashcroft and Z. Manna,"Formalization of Properties of Parallel Programs",
  Machine Intelligence 6, Edinburgh Univ. Press, 1971.

>> E. Ashcroft and Z. Manna, "The Translation of `Go To' Programs to `While'
 Programs", Proc. IFIP Congress 1971.

>> K. M. Colby, S. Weber, F. D. Hilf, "Artificial Paranoia", J. Art. Int.,
Vol. 2, No. 1, 1971.

>> G. Falk, "Scene Analysis Based on Imperfect Edge Data", Proc.
 2IJCAI, Brit. Comp. Soc., Sept. 1971.

>> J. A. Feldman and A. Bierman, "A Survey of Grammatical Inference",
  Proc. International Congress on Pattern Recognition, Honolulu,
  January 1971, also in S. Watanabe (ed.), FRONTIERS OF PATTERN
  RECOGNITION, Academic Press, 1972.

>> J. Feldman and R. Sproull, "System Support for the Stanford
 Hand-eye System", Proc. 2IJCAI, Brit. Comp. Soc., Sept. 1971.

>> J. Feldman, et al, "The Use of Vision and Manipulation to Solve
 the `Instant Insanity' Puzzle", Proc. 2IJCAI, Brit. Comp. Soc.,
 Sept. 1971.

>> R. W. Floyd, "Toward Interactive Design of Correct Programs",
  Proc. IFIP Congress 1971, pp. 1-5.

>> A. Hearn, "Applications of Symbol Manipulation in Theoretical
 Physics", Comm. ACM, August 1971.

>> F. Hilf, K. Colby, D. Smith, W. Wittner, W. Hall,
  "Machine-Mediated Interviewing", J. Nervous & Mental Disease,
  Vol. 152, No. 4, 1971.

>> M. Hueckel, "An Operator which Locates Edges in Digitized
  Pictures", JACM, January 1971.

>> M. Kahn and B. Roth, "The Near-minimum-time Control of Open-loop
 Articulated Kinematic Chains", Trans. ASME, Sept. 1971.

>> R. Kling, "A Paradigm for Reasoning by Analogy", Proc. 2IJCAI,
 Brit. Comp. Soc., Sept. 1971.

>> D. Knuth, "An Empirical Study of FORTRAN Programs", Software --
  Practice and Experience, Vol. 1, 105-133, 1971.

>> D. Luckham and N. Nilsson, "Extracting Information from Resolution
Proof Trees", Artificial Intelligence Journal, Vol. 2, No. 1, pp. 27-54,
June 1971.

>> Z. Manna (with R. Waldinger), "Toward Automatic Program Synthesis",
  Comm. ACM, March 1971.

>> Z. Manna, "Mathematical Theory of Partial Correctness", J. Comp.
 & Sys. Sci., June 1971.

>> R. Milner, "An Algebraic Definition of Simulation between
 Programs", Proc. 2IJCAI, Brit. Comp. Soc., Sept. 1971.

>> U. Montanari, "On the Optimal Detection of Curves in Noisy
  Pictures", Comm. ACM, May 1971.

>> N. Nilsson, "Problem-solving Methods in Artificial Intelligence",
  McGraw-Hill, New York, 1971.

>> R. Paul, "Trajectory Control of a Computer Arm", Proc. 2IJCAI,
 Brit. Comp. Soc., Sept. 1971.

>> K. Pingle and J. Tenenbaum, "An Accommodating Edge Follower",
 Proc. 2IJCAI, Brit. Comp. Soc., Sept. 1971.

>> B. Roth, "Design, Kinematics, and Control of Computer-controlled
  Manipulators", Proc. 6th All Union Conference on New Problems in
  Theory of Machines & Mechanics, Leningrad, Jan. 1971.

>> R. Schank, "Finding the Conceptual Content and Intention in an
 Utterance in Natural Language Conversation", Proc. 2IJCAI, Brit.
 Comp. Soc., 1971.

>> J. Tenenbaum, et al, "A Laboratory for Hand-eye Research", Proc.
 IFIP Congress, 1971.
>> A. W. Biermann and J. A. Feldman, "On the Synthesis of Finite-state Machines from Samples
  of Their Behavior", IEEE Transactions on Computers, Vol. C-21, No. 6,
  pp. 592-596, June 1972.

>> A. W. Biermann, "On the Inference of Turing Machines from Sample
Computations", Artificial Intelligence J., Vol. 3, No. 3, Fall 1972.

>> J. M. Cadiou and Z. Manna, "Recursive Definitions of Partial Functions
and their Computations", ACM SIGPLAN Notices, Vol. 7, No. 1, January 1972.

>> K. M. Colby, F. D. Hilf, S. Weber, and H. C. Kraemer, "Turing-like
Indistinguishability Tests for the Validation of a Computer Simulation
of Paranoid Processes", Artificial Intelligence J., Vol. 3, No. 3, Fall 1972.

>> G. Falk, "Interpretation of Imperfect Line Data as a Three-Dimensional
Scene", J. Artificial Intelligence, Vol. 3, No. 2, 1972.

>> J. A. Feldman, "Some Decidability Results on Grammatical Inference
  and Complexity", Information and Control, Vol. 20, No. 3, pp. 244-262,
  April 1972.

>> J. A. Feldman, J. R. Low, D. C. Swinehart, and R. H. Taylor, "Recent
  Developments in SAIL, an ALGOL-based language for Artificial
  Intelligence", Proc. Fall Joint Computer Conference, 1972.

>> S. J. Garland and D. C. Luckham, "Translating Recursive Schemes into
Program Schemes", ACM SIGPLAN Notices, Vol. 7, No. 1, January 1972.

>> J. Gips, "A New Reversible Figure", Perceptual & Motor Skills, 34, 306, 1972.

>> Richard B. Kieburtz and David Luckham, "Compatibility and Complexity
of Refinements of the Resolution Principle", SIAM J. Comput., Vol. 1, No. 4,
December 1972.

>> D. E. Knuth, "Ancient Babylonian Algorithms", Comm. ACM, July 1972.

>> R. L. London, "Correctness of a Compiler for a LISP Subset", ACM SIGPLAN
Notices, Vol. 7, No. 1, January 1972.

>> Z. Manna, S. Ness, and J. Vuillemin, "Inductive Methods for Proving
Properties of Programs", ACM SIGPLAN Notices, Vol. 7, No. 1, January 1972.

>> Z. Manna and J. Vuillemin, "Fixpoint Approach to the Theory of
Computation", Comm. ACM, July 1972.

>> R. Milner, "Implementation and Application of Scott's Logic for Computable
Functions", ACM SIGPLAN NOTICES, Vol. 7, No. 1, January 1972.

>> James A. Moorer, "Music and Computer Composition", Comm. ACM, January
1972.
.APART END
.SKIP TO COLUMN 1
.SSS THESES

Theses that have been published by the Stanford Artificial Intelligence
Laboratory are listed here.  Several earned degrees at institutions
other than Stanford, as noted.

.BEGIN INDENT 0,7;
AIM-43, R. Reddy, AN  APPROACH  TO  COMPUTER  SPEECH  RECOGNITION  BY
     DIRECT  ANALYSIS  OF  THE  SPEECH WAVE, Ph.D. Thesis in Computer
     Science, September 1966.

AIM-46, S. Persson, SOME SEQUENCE EXTRAPOLATING PROGRAMS:  A STUDY OF
     REPRESENTATION  AND  MODELING IN INQUIRING SYSTEMS, Ph.D. Thesis
     in  Computer  Science,  University  of   California,   Berkeley,
     September 1966.

AIM-47,  B. Buchanan, LOGICS OF SCIENTIFIC DISCOVERY, Ph.D. Thesis in
     Philosophy, University of California, Berkeley, December 1966.

AIM-44, J.  Painter,  SEMANTIC  CORRECTNESS  OF  A  COMPILER  FOR  AN
     ALGOL-LIKE  LANGUAGE,  Ph.D.   Thesis in Computer Science, March
     1967.

AIM-56, W. Wichman, USE OF OPTICAL FEEDBACK IN THE  COMPUTER  CONTROL
     OF AN ARM, Eng. Thesis in Electrical Engineering, August 1967.

AIM-58,  M. Callero, AN ADAPTIVE COMMAND AND CONTROL SYSTEM UTILIZING
     HEURISTIC  LEARNING  PROCESSES,  Ph.D.   Thesis  in   Operations
     Research, December 1967.

AIM-60,   D.   Kaplan,   THE  FORMAL  THEORETIC  ANALYSIS  OF  STRONG
     EQUIVALENCE FOR ELEMENTAL PROPERTIES, Ph.D. Thesis  in  Computer
     Science, July 1968.

AIM-65, B. Huberman, A PROGRAM TO PLAY CHESS END GAMES, Ph.D.  Thesis
     in Computer Science, August 1968.

AIM-73, D.  Pieper, THE KINEMATICS  OF  MANIPULATORS  UNDER  COMPUTER
     CONTROL, Ph.D. Thesis in Mechanical Engineering, October 1968.

AIM-74,  D. Waterman, MACHINE LEARNING OF HEURISTICS, Ph.D. Thesis in
     Computer Science, December 1968.

AIM-83, R.  Schank, A  CONCEPTUAL  DEPENDENCY  REPRESENTATION  FOR  A
     COMPUTER  ORIENTED  SEMANTICS,  Ph.D.   Thesis  in  Linguistics,
     University of Texas, March 1969.

AIM-85, P.  Vicens, ASPECTS OF SPEECH RECOGNITION BY COMPUTER,  Ph.D.
     Thesis in Computer Science, March 1969.

AIM-92,   Victor   D.    Scheinman,  DESIGN  OF  COMPUTER  CONTROLLED
     MANIPULATOR, Eng. Thesis in Mechanical Engineering, June 1969.

AIM-96, Claude Cordell Green, THE APPLICATION OF THEOREM  PROVING  TO
     QUESTION-ANSWERING   SYSTEMS,   Ph.D.     Thesis  in  Electrical
     Engineering, August 1969.

AIM-98, James J.  Horning, A STUDY OF  GRAMMATICAL  INFERENCE,  Ph.D.
     Thesis in Computer Science, August 1969.

AIM-106,  Michael E. Kahn, THE NEAR-MINIMUM-TIME CONTROL OF OPEN-LOOP
     ARTICULATED  KINEMATIC  CHAINS,  Ph.D.   Thesis  in   Mechanical
     Engineering, December 1969.

AIM-121, Irwin Sobel, CAMERA MODELS  AND  MACHINE  PERCEPTION,  Ph.D.
     Thesis in Electrical Engineering, May 1970.

AIM-130,  Michael  D.   Kelly,  VISUAL  IDENTIFICATION  OF  PEOPLE BY
     COMPUTER, Ph.D. Thesis in Computer Science, July 1970.

AIM-132, Gilbert Falk, COMPUTER INTERPRETATION OF IMPERFECT LINE DATA
     AS  A  THREE-DIMENSIONAL  SCENE,  Ph.D.   Thesis  in  Electrical
     Engineering, August 1970.

AIM-134, Jay Martin  Tenenbaum,  ACCOMMODATION  IN  COMPUTER  VISION,
     Ph.D. Thesis in Electrical Engineering, September 1970.

AIM-144,  Lynn H. Quam, COMPUTER COMPARISON OF PICTURES, PhD Thesis in
     Computer Science, May 1971.

AIM-147, Robert E. Kling, REASONING BY ANALOGY WITH APPLICATIONS TO
     HEURISTIC PROBLEM SOLVING:  A CASE STUDY, PhD Thesis in Computer
     Science, August 1971.

AIM-149, Rodney Albert Schmidt, Jr., A STUDY OF THE REAL-TIME CONTROL
     OF A COMPUTER-DRIVEN VEHICLE, PhD Thesis in Electrical Engineering,
     August 1971.

AIM-155, Jonathan Leonard Ryder, HEURISTIC ANALYSIS OF LARGE TREES AS
     GENERATED IN THE GAME OF GO, PhD Thesis in Computer Science,
     December 1971.

AIM-163, J.M. Cadiou, RECURSIVE DEFINITIONS OF PARTIAL FUNCTIONS  AND
     THEIR COMPUTATIONS, PhD Thesis in Computer Science, April 1972.

AIM-173, Gerald Jacob Agin, REPRESENTATION AND DESCRIPTION OF CURVED
     OBJECTS, PhD Thesis in Computer Science, October 1972.

AIM-174, Morris, Francis Lockwood, CORRECTNESS OF TRANSLATIONS OF
     PROGRAMMING LANGUAGES--AN ALGEBRAIC APPROACH, PhD Thesis in
     Computer Science, August 1972.

AIM-177, Paul, Richard, MODELLING, TRAJECTORY CALCULATION AND SERVOING
     OF A COMPUTER CONTROLLED ARM, PhD Thesis in Computer Science,
     (forthcoming)

AIM-178, Gill, Aharon, VISUAL FEEDBACK AND RELATED PROBLEMS IN COMPUTER
     CONTROLLED HAND EYE COORDINATION, PhD Thesis in Electrical
     Engineering, October 1972.

AIM-180, Bajcsy, Ruzena, COMPUTER IDENTIFICATION OF TEXTURED VISUAL
     SCENES, PhD Thesis in Computer Science, October 1972.
.END
.SKIP TO COLUMN 1
.SSS FILM REPORTS

Prints of the following film reports are available for loan to interested
groups.
.BEGIN VERBATIM

1.  Art Eisenson and Gary Feldman, "Ellis D. Kroptechev and Zeus, his
      Marvelous  Time-Sharing  System",  16mm  black  and  white with
      sound, 15 minutes, March 1967.

The advantages of time-sharing over  standard  batch  processing  are
revealed  through the good offices of the Zeus time-sharing system on
a PDP-1 computer.  Our hero, Ellis, is saved from a fate  worse  than
death.   Recommended for mature audiences only.

2.  Gary Feldman, "Butterfinger", 16mm color with sound,  8  minutes,
      March 1968.

Describes  the  state  of  the  hand-eye  system  at  the  Artificial
Intelligence Project in the fall of 1967.  The PDP-6 computer getting
visual information  from  a  television  camera  and  controlling  an
electrical-mechanical  arm  solves  simple  tasks  involving stacking
blocks.  The techniques of recognizing the blocks and their positions
as well as controlling the arm are briefly presented.  Rated "G".

3.  Raj Reddy, Dave Espar and Art Eisenson, "Hear Here",  16mm  color
      with sound, 15 minutes, March 1969.

Describes the state of the speech recognition project as  of  Spring,
1969.  A discussion of the problems of speech recognition is followed
by two real time demonstrations of the  current  system.   The  first
shows the computer learning to recognize phrases and the second shows
how the hand-eye system may be controlled by voice commands.  Commands
as complicated as "Pick up the small block in the lower lefthand
corner" are recognized, and the tasks are carried out by the computer
controlled arm.

4.  Gary Feldman and Donald Pieper, "Avoid", 16mm  silent,  color,  5
      minutes, March 1969.

Reports on a computer program written by  D.  Pieper  for  his  Ph.D.
Thesis.    The   problem   is   to   move   the  computer  controlled
electrical-mechanical arm through a space filled  with  one  or  more
known  obstacles.   The  program  uses  heuristics for finding a safe
path; the film demonstrates the  arm  as  it  moves  through  various
cluttered environments with fairly good success.

5.  Richard Paul and Karl Pingle, "Instant Insanity",  16  mm  color,
      silent, 6 minutes, August, 1971.

Shows the hand/eye system  solving  the  puzzle  "Instant  Insanity".
Sequences  include  finding  and recognizing cubes, color recognition
and object manipulation.  This film was made  to  accompany  a  paper
presented  at  the  1971 International Joint Conference on Artificial
Intelligence in London.  It may  be  hard  to  understand  without  a
narrator.

6.  Suzanne Kandra, "Motion and  Vision",  16  mm  color,  sound,  22
      minutes, November 1972.

A  technical  presentation  of  three  research projects completed in
1972:  advanced arm control by R. P. Paul [AIM-177],  visual feedback
control  by  A. Gill [AIM-178], and representation and description of
curved objects by G. Agin [AIM-173].
.END
.SKIP TO COLUMN 1
.SSS RECENT REPORTS

Abstracts of recent Artificial Intelligence Memos are given below.

.BEGIN VERBATIM
                                1971

AIM-140,   Roger   C.   Schank,   INTENTION,   MEMORY,  AND  COMPUTER
	UNDERSTANDING, January 1971, 59 pages.

Procedures  are  described for discovering the intention of a speaker
by relating the Conceptual Dependency representation of the speaker's
utterance to the computer's world model such that simple implications
can be made.  These procedures function at levels higher than that of
structure of the memory.  Computer understanding of natural  language
is  shown  to consist of the following parts:  assigning a conceptual
representation to an  input;  relating  that  representation  to  the
memory such as to extract the intention of the speaker; and selecting
the correct response type triggered by such an utterance according to
the situation.

AIM-141,  Bruce  G.  Buchanan,  Edward  A.  Feigenbaum,  and   Joshua
	Lederberg,  THE  HEURISTIC  DENDRAL   PROGRAM  FOR EXPLAINING
	EMPIRICAL DATA, February 1971, 20 pages.

The Heuristic DENDRAL program uses an information processing model of
scientific   reasoning   to  explain  experimental  data  in  organic
chemistry. This report summarizes the organization and results of the
program  for  computer scientists.  The program is divided into three
main parts: planning, structure generation, and evaluation.

The planning phase infers constraints on the search  space  from  the
empirical  data  input to the system.  The structure generation phase
searches a tree whose termini are models of chemical molecules using
pruning  heuristics of various kinds.  The evaluation phase tests the
candidate structures against  the  original  data.   Results  of  the
program's analyses of some tests are discussed.

AIM-142,  Robin Milner, AN ALGEBRAIC DEFINITION OF SIMULATION BETWEEN
	PROGRAMS, February 1971, 21 pages.

A simulation relation between programs is defined which is a
quasi-ordering.  Mutual simulation is then an equivalence relation,
and by dividing out by it we abstract from a program such details  as
how  the  sequencing  is controlled and how data is represented.  The
equivalence classes are approximations to the  algorithms  which  are
realized, or expressed, by their member programs.

A  technique  is  given  and  illustrated  for proving simulation and
equivalence of programs; there is an analogy with  Floyd's  technique
for   proving   correctness  of  programs.   Finally,  necessary  and
sufficient conditions for simulation are given.

AIM-143.  John McCarthy, Arthur Samuel  and  Artificial  Intelligence
	Project staff,  Edward  Feigenbaum   and   Heuristic  Dendral
	Project staff, PROJECT TECHNICAL REPORT, March 1971,
	80 pages, (out of print).

An  overview  is  presented  of  current  research  at  Stanford   in
artificial  intelligence  and  heuristic programming.  This report is
largely the text of a proposal  to  the  Advanced  Research  Projects
Agency for fiscal years 1972-3.

AIM-144.  Lynn H. Quam, COMPUTER COMPARISON OF PICTURES,   May  1971,
	120 pages.

This   dissertation  reports  the  development  of  digital  computer
techniques  for  detecting  changes  in  scenes  by  normalizing  and
comparing  pictures  which were taken from different camera positions
and under different conditions  of  illumination.  The  pictures  are
first  geometrically normalized to a common point of view.  Then they
are photometrically normalized to eliminate the  differences  due  to
different   illumination,  camera  characteristics,  and  reflectance
properties of the scene due to different sun and view angles.   These
pictures  are  then  geometrically registered by maximizing the cross
correlation  between  areas  in  them.   The  final  normalized   and
registered pictures are then differenced point by point.
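
The registration step can be suggested by a small sketch in
present-day Python with numpy (an editor's illustration: integer
shifts only, and the geometric and photometric normalizations
described above are assumed already done):

import numpy as np

def ncc(a, b):
    # Normalized cross correlation of two equal-shape arrays.
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def register(ref, img, window=5):
    # The integer shift of img maximizing correlation with ref.
    best, shift = -2.0, (0, 0)
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            c = ncc(ref, np.roll(np.roll(img, dy, 0), dx, 1))
            if c > best:
                best, shift = c, (dy, dx)
    return shift

def difference(ref, img):
    # Align img to ref, then difference point by point.
    dy, dx = register(ref, img)
    return ref.astype(float) - np.roll(np.roll(img, dy, 0), dx, 1)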

The  geometric  normalization  techniques require relatively accurate
geometric models for the camera and the  scene,  and  static  spatial
features  must  be present in the pictures to allow precise geometric
alignment using the technique of cross correlation maximization.

Photometric normalization also requires a relatively  accurate  model
for  the  photometric response of the camera, a reflectance model for
the scene (reflectance as a function of the  illumination  view,  and
phase  angles)  and  some  assumptions about the kinds of reflectance
changes which are to be detected.

These techniques have been incorporated in  a  system  for  comparing
Mariner 1971 pictures of Mars to detect variable surface phenomena as
well as color and polarization  differences.   The  system  has  been
tested using Mariner 6 and 7 pictures of Mars.

Although the techniques described in this dissertation were developed
for Mars pictures, their use is  not  limited  to  this  application.
Various  parts  of  this  software  package,  which was developed for
interactive use on the time-sharing system of the Stanford Artificial
Intelligence Project, are currently being applied to other scenery.


AIM-145,  Bruce  G.  Buchanan,  Edward  A.  Feigenbaum,  and   Joshua
	Lederberg, A HEURISTIC PROGRAMMING STUDY OF THEORY  FORMATION
	IN SCIENCE, June 1971, 41 pages.

The Meta-DENDRAL program is a  vehicle  for  studying  problems  of
theory formation in science.  The general strategy of Meta-DENDRAL is
to reason from data to plausible generalizations and then to organize
the  generalizations  into  a unified theory.  Three main subproblems
are discussed: (1) explain the experimental data for each  individual
chemical structure, (2) generalize the results from each structure to
all structures, and (3) organize the generalizations into  a  unified
theory.   The  program  is  built  upon  the  concepts and programmed
routines already  available  in  the  Heuristic  DENDRAL  performance
program,  but  goes  beyond  the performance program in attempting to
formulate the theory which the performance program will use.


AIM-146, Andrei P. Ershov, PARALLEL PROGRAMMING, July 1971, 14 pages.

This  report is based on lectures given at Stanford University by Dr.
Ershov in November, 1970.


AIM-147,  Robert  E. Kling, REASONING BY ANALOGY WITH APPLICATIONS TO
	HEURISTIC  PROBLEM  SOLVING:   A  CASE  STUDY,   August 1971,
	191 pages.

An  information-processing  approach  to  reasoning  by  analogy   is
developed  that  promises  to  increase  the  efficiency of heuristic
deductive problem-solving systems.  When a deductive  problem-solving
system accesses a large set of axioms, more than sufficient for a
particular problem, it will often create many irrelevant deductions
that saturate the memory of the problem solver.

Here,  an  analogy  with  some  previously  solved  problem and a new
unsolved problem is used to restrict the data base to a small set  of
appropriate  axioms.  This procedure (ZORBA) is studied in detail for
a resolution theorem proving system.  A set of  algorithms  (ZORBA-I)
which  automatically  generates  an  analogy  between  a new unproved
theorem, T↓A, and a previously proved theorem,  T,  is  described  in
detail. ZORBA-I is implemented in LISP on a PDP-10.

ZORBA-I is examined in terms of its empirical performance on parts of
analogous theorems drawn from abstract algebra.   Analytical  studies
are  included  which show that ZORBA-I can be useful to aid automatic
theorem proving in many pragmatic cases while it may be  a  detriment
in certain specially contrived cases.


AIM-148, Edward Ashcroft, Zohar Manna,  and  Amir  Pnueli,  DECIDABLE
	PROPERTIES OF MONADIC FUNCTIONAL SCHEMAS,  July  1971,
	10 pages.

We  define  a  class  of  (monadic) functional schemas which properly
include `Ianov' flowchart schemas.  We  show  that  the  termination,
divergence and freedom problems for functional schemas are decidable.
Although it is possible  to  translate  a  large  class  of  non-free
functional  schemas  into equivalent free functional schemas, we show
that this  cannot  be  done  in  general.   We  show  also  that  the
equivalence  problem  for free functional schemas is decidable.  Most
of  the  results  are  obtained from  well-known  results  in  Formal
Languages and Automata Theory.


AIM-149, Rodney Albert Schmidt, Jr., A STUDY OF THE REAL-TIME CONTROL
	OF A COMPUTER-DRIVEN VEHICLE, August 1971, 180 pages.

Vehicle  control  by  the  computer  analysis  of  visual  images  is
investigated.   The  areas  of  guidance,  navigation,  and  incident
avoidance  are  considered.  A television camera is used as the prime
source of visual image data.

In the guidance system developed for an experimental vehicle,  visual
data  is  used to gain information about the vehicle system dynamics,
as well as to guide the vehicle.  This information is  used  in  real
time  to  improve performance of the non-linear, time-varying vehicle
system.

A scheme for navigation by pilotage through the  recognition  of  two
dimensional  scenes  is developed.  A method is proposed to link this
to a computer-modelled map in order to make journeys.

Various difficulties in avoiding anomalous incidents in the automatic
control  of  automobiles are discussed, together with suggestions for
the application of this  study  to  remote  exploration  vehicles  or
industrial automation.


AIM-150, Robert  W.  Floyd,  TOWARD  INTERACTIVE  DESIGN  OF  CORRECT
	PROGRAMS, September 1971, 12 pages.

We propose an interactive system for proving the correctness of a
program, or locating errors, as the program is designed.

AIM-151, Ralph L. London, CORRECTNESS OF TWO  COMPILERS  FOR  A  LISP
	SUBSET, October 1971, 41 pages.

Using  mainly  structural induction, proofs of correctness of each of
two running  Lisp  compilers  for  the  PDP-10  computer  are  given.
Included  are the rationale for presenting these proofs, a discussion
of the proofs, and the changes  needed  to  the  second  compiler  to
complete its proof.


AIM-152, A.W. Biermann, ON THE  INFERENCE  OF  TURING  MACHINES  FROM
	SAMPLE COMPUTATIONS, October 1971, 31 pages.

An  algorithm is presented which when given a complete description of
a set of Turing machine computations finds a Turing machine which  is
capable of doing those computations.  This algorithm can serve as the
basis for designing a trainable  device  which  can  be  trained   to
simulate  any  Turing machine by being led through a series of sample
computations done by that machine.  A  number of examples  illustrate
the  use  of  the technique and the possibility of the application to
other types of problems.


AIM-153,  Patrick J. Hayes, THE FRAME PROBLEM AND RELATED PROBLEMS IN
	ARTIFICIAL INTELLIGENCE, November 1971, 18 pages.

The frame problem arises in considering the logical  structure  of  a
robot's beliefs.  It has been known for some years, but only recently
has much progress been made.  The problem is described and discussed.
Various  suggested  methods  for  its  solution  are  outlined,  and
described in a uniform notation.   Finally,  brief  consideration  is
given  to  the  problem  of  adjusting a belief system in the face of
evidence which contradicts beliefs.  It is shown that a variation  on
the  situation  notation  of  (McCarthy  and  Hayes, 1969) permits an
elegant approach, and relates this problem to the frame problem.


AIM-154,  Zohar  Manna,  Stephen  Ness  and Jean Vuillemin, INDUCTIVE
	METHODS  FOR  PROVING  PROPERTIES OF PROGRAMS, November 1971,
	24 pages.

We  have  two  main  purposes  in  this paper.  First, we clarify and
extend  known  results  about  computation  of  recursive   programs,
emphasizing  the  difference  between  the  theoretical and practical
approaches.  Secondly, we present and examine various  known  methods
for  proving  properties of recursive programs.  We discuss in detail
two  powerful  inductive   methods,   computational   induction   and
structural  induction,  illustrating  their  applications  by various
examples. We also briefly discuss some other related methods.

Our aim in this work is to introduce inductive methods to as  wide  a
class  of  readers  as  possible  and  to  demonstrate their power as
practical  techniques.  We  ask   the   forgiveness   of   our   more
theoretical-minded  colleagues  for  our occasional choice of clarity
over precision.


AIM-155, Jonathan Leonard Ryder, HEURISTIC ANALYSIS OF LARGE TREES AS
	GENERATED IN THE GAME OF GO, December 1971, 300 pages.

The  Japanese  game  of  Go  is  of  interest  both  as  a problem in
mathematical representation and as a game which generates a move  tree
with  an  extraordinarily  high branching factor (100 to 300 branches
per ply).  The complexity of Go (and the difficulty of Go  for  human
players)  is  thought  to be considerably greater than that of chess.
The constraints of being able to play a complete game  and  of  being
able to produce a move with a moderate amount of processing time were
placed on the solution.

The basic approach  used  was  to  find  methods  for  isolating  and
exploring  several  sorts  of relevant subsections of the global game
tree.  This process depended heavily on the  ability  to  define  and
manipulate  the  entities of Go as recursive functions rather than as
patterns of stones. A general machine-accessible  theory  of  Go  was
developed to provide context for program evaluations.

A  program  for  playing  Go  is now available on the Stanford PDP-10
computer.  It will play a  complete  game,  taking  about  10  to  30
seconds for an average move.  The quality of play is better than that
of a beginner in many respects, but incompletenesses in  the  current
machine-representable  theory  of Go prevent the present program from
becoming a strong player.


AIM-156,  Kenneth Mark Colby, Franklin Dennis Hilf, Sylvia Weber, and
	Helena C. Kraemer,  A  RESEMBLANCE TEST  FOR  THE  VALIDATION
	OF   A   COMPUTER   SIMULATION   OF   PARANOID     PROCESSES,
	November 1971, 29 pages.

A computer simulation of paranoid processes in the form of a dialogue
algorithm  was  subjected to a validation study using an experimental
resemblance test in which judges rate  degrees of paranoia present in
initial  psychiatric  interviews  of  both  paranoid  patients and of
versions of the paranoid model.  The statistical results  indicate  a
satisfactory   degree  of  resemblance  between  the  two  groups  of
interviews. It is concluded that  the  model  provides  a  successful
simulation of naturally occurring paranoid processes.


AIM-157, Yorick Wilks, ONE SMALL HEAD -- SOME REMARKS ON THE  USE  OF
	`MODEL' IN LINGUISTICS, December 1971, 17 pages.

I argue that the present situation in formal linguistics, where  much
new  work  is presented as being a "model of the brain", or of "human
language behavior", is  an  undesirable  one.   My  reason  for  this
judgement  is  not  the  conservative  (Braithwaitian)  one  that the
entities in question are not  really  models  but  theories.   It  is
rather that they are called models because they cannot be theories of
the brain at the present stage of brain research, and hence that  the
use  of  "model"  in  this  context  is  not  so much aspirational as
resigned about our total  ignorance  of  how  the  brain  stores  and
processes   linguistic   information.  The  reason  such  explanatory
entities cannot be theories is  that  this  ignorance  precludes  any
"semantic  ascent" up the theory; i.e., interpreting the items of the
theory in terms of observables.  And the brain items,  whatever  they
may  be,  are  not,  as  Chomsky  has  sometimes claimed, in the same
position as the "occult entities" of Physics  like  Gravitation;  for
the brain items are not theoretically unreachable, merely unreached.

I  then  examine  two  possible  alternate  views  of what linguistic
theories should be proffered as theories of:   theories  of  sets  of
sentences, and theories of a particular class of algorithms.  I argue
for a form of the latter view, and that  its  acceptance  would  also
have the effect of making Computational Linguistics a central part of
Linguistics, rather than the poor relation it is now.

I examine a distinction among "linguistic models"  proposed  recently
by Mey, who was also arguing for the self-sufficiency of
Computational Linguistics, though as a "theory  of  performance".   I
argue  that  his  distinction  is  a  bad one, partly for the reasons
developed above and partly because he attempts to tie it to Chomsky's
inscrutable competence-performance distinction.  I conclude that the
independence and self-sufficiency of  Computational  Linguistics  are
better supported by the arguments of this paper.


AIM-158, Zohar Manna, Ashok Chandra, PROGRAM SCHEMAS  WITH  EQUALITY,
	December 1971, 22 pages.

We discuss the class  of  program  schemas  augmented  with  equality
tests,  that  is, tests of equality between terms.  In the first part
of the paper we illustrate the "power" of equality tests.   It  turns
out  that the class of program schemas with equality is more powerful
than  the  "maximal"  classes   of   schemas   suggested   by   other
investigators.  In  the  second  part  of  the  paper, we discuss the
decision problems of program schemas with equality. It is shown,  for
example, that while the decision problems normally considered for
schemas (such as halting, divergence, equivalence, isomorphism and
freedom) are decidable for Ianov schemas, they all become
undecidable if general equality tests are added.  We suggest,
however,  limited  equality  tests  which  can  be  added  to certain
subclasses of program schemas  while  preserving  their  decidability
property.

                                1972


AIM-159,  J.A.  Feldman  and  Paul  C.  Shields, TOTAL COMPLEXITY AND
	INFERENCE OF BEST PROGRAMS, April 1972.

Axioms  for  a  total  complexity  measure  for abstract programs are
presented. Essentially, they require  that  total  complexity  be  an
unbounded  increasing  function  of  the Blum time and size measures.
Algorithms for finding the  best  program  on  a  finite  domain  are
presented,   and   their   limiting  behavior  for  infinite  domains
described.  For total complexity, there are important senses in which
a machine can find the best program for a large class of functions.


AIM-160,  J. Feldman, AUTOMATIC PROGRAMMING, February 1972, 20 pages.

The revival of interest in Automatic Programming is  considered.  The
research  is divided into direct efforts and theoretical developments
and the successes and prospects of each are described.


AIM-161, Y. Wilks, AN ARTIFICIAL  INTELLIGENCE  APPROACH  TO  MACHINE
	TRANSLATION, February 1972, 44 pages.

The paper describes a system of  semantic  analysis  and  generation,
programmed  in  LISP  1.5  and designed to pass from paragraph length
input in English to French via  an  interlingual  representation.   A
wide class of English input forms will be covered, but the vocabulary
will initially be restricted to one of a  few  hundred  words.   With
this  subset  working,  and  during the current year (1971-72), it is
also hoped to map the interlingual representation onto some predicate
calculus notation so as to make possible the answering of very simple
questions about the translated  matter.   The  specification  of  the
translation system itself is complete, and its main points are:
i) It translates phrase by  phrase--with  facilities  for  reordering
phrases  and  establishing  essential semantic connectivities between
them--by mapping complex semantic structures of "message" onto each
phrase.  These  constitute  the  interlingual  representation  to  be
translated.  This matching is done without  the  explicit  use  of  a
conventional  syntax  analysis,  by taking as the appropriate matched
structure the "most dense" of  the  alternative  structures  derived.
This  method  has been found highly successful in earlier versions of
this analysis system.

ii) The French output strings are generated without the explicit  use
of  a  generative  grammar.   That  is  done by means of STEREOTYPES:
strings of French words, and functions evaluating  to  French  words,
which are attached to English word senses in the dictionary and built
into the interlingual representation by the analysis  routines.   The
generation  program thus receives an interlingual representation that
already contains both  French  output  and  implicit  procedures  for
assembling  the output, since the stereotypes are in effect recursive
procedures specifying the content and production of the  output  word
strings.  Thus the generation program at no time consults a word
dictionary or inventory of grammar rules.

It  is  claimed that the system of notation and translation described
is a convenient one for expressing and handling the items of semantic
information that are ESSENTIAL to any effective MT system.  I discuss
in some detail the semantic information needed to ensure the  correct
choice  of output prepositions in French; a vital matter inadequately
treated by virtually all previous formalisms and projects.


AIM-162, Roger Schank, Neil Goldman, Chuck  Rieger,  Chris  Riesbeck,
	PRIMITIVE    CONCEPTS    UNDERLYING    VERBS    OF   THOUGHT,
	April 1972, 102  pages.

In  order  to  create  conceptual  structures  that will uniquely and
unambiguously represent the meaning of an utterance, it is  necessary
to  establish  `primitive'  underlying  actions and states into which
verbs can be mapped.  This paper presents analyses of the most common
mental  verbs in terms of such primitive actions and states. In order
to represent the way people speak about their  mental  processes,  it
was  necessary  to  add  to  the  usual ideas of memory structure the
notion of Immediate Memory.  It is then argued that  there  are  only
three primitive mental ACTs.


AIM-163, J.M. Cadiou, RECURSIVE DEFINITIONS OF PARTIAL FUNCTIONS  AND
	THEIR COMPUTATIONS, April 1972, 160 pages.

A formal syntactic and semantic model  is  presented  for  `recursive
definitions'  which  are  generalizations of those found in LISP, for
example.   Such  recursive  definitions  can  have  two  classes   of
fixpoints,  the  strong  fixpoints  and  the weak fixpoints, and also
possess a class of computed partial functions.

Relations between these classes are presented:  fixpoints  are  shown
to  be  extensions  of  computed  functions.   More precisely, strong
fixpoints are shown to be extensions of computed functions  when  the
computations   may   involve  "call  by  name"  substitutions;  weak
fixpoints are shown to be extensions of computed functions  when  the
computation  only  involve  "call  by  value"   substitutions.    The
Church-Rosser  property for recursive definitions with fixpoints also
follows from these results.

Then conditions are given on the recursive definitions to ensure that
they possess least fixpoints (of both classes), and computation rules
are given for computing these two fixpoints:  the "full" computation
rule leads to the least strong fixpoint, and "call by value"
computation leads to the least weak fixpoint.  A general class of
computation rules, called `safe innermost', also lead to the latter
fixpoint.  The "leftmost innermost" rule is a special case of those,
for the LISP recursive definitions.
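
The weak/strong distinction can be illustrated with the recursive
definition  f(x,y) <= if x = 0 then 0 else f(x-1, f(x,y)):  computed
by "call by value", f(1,0) never terminates (it must first compute
f(1,0) itself), while "call by name" yields 0.  An editor's sketch
in present-day Python, simulating call by name with thunks:

import sys
sys.setrecursionlimit(100)          # make the divergence fail fast

def f_name(x, y):                   # arguments are thunks
    if x() == 0:
        return 0
    return f_name(lambda: x() - 1, lambda: f_name(x, y))

print(f_name(lambda: 1, lambda: 0))          # prints 0

def f_value(x, y):
    if x == 0:
        return 0
    return f_value(x - 1, f_value(x, y))     # evaluates f(x,y) first

try:
    f_value(1, 0)
except RecursionError:
    print("call-by-value computation of f(1,0) diverges")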


AIM-164,  Zohar Manna  and  Jean  Vuillemin, FIXPOINT APPROACH TO THE
	THEORY OF COMPUTATION, April 1972, 29 pages. 

Substantial effort has been put into finding verification  techniques
for  computer programs. There are now so many verification techniques
that a choice in a practical situation may appear  difficult  to  the
non-specialist.  In particular, the most widely used methods, such as
the  "inductive  assertion  method",  can  be  used  to  prove   some
input-output  relation  (assuming  that the program terminates) while
such  problems  as  termination  or  equivalence  usually  require  a
different type of technique.

Our  main purpose in this paper is to suggest that Scott's fixedpoint
approach to the semantics of programming languages frees us from that
dilemma.  It allows one not only to justify all existing verification
techniques, but also to extend them to  handle  other  properties  of
computer  programs,  including  termination  and  equivalence,  in  a
uniform manner.

AIM-165, D.A. Bochvar, TWO  PAPERS  ON  PARTIAL  PREDICATE  CALCULUS,
	April 1972, 50 pages.

The three-valued system to which this study is devoted is of interest
as a logical calculus  for  two  reasons:   first,  it  is  based  on
formalization  of  certain  basic  and  intuitively obvious relations
satisfied by the predicates  "true",  "false"  and  "meaningless"  as
applied  to  propositions,  and  as  a  result the system possesses a
clear-cut  and  intrinsically  logical  interpretation;  second,  the
system  provides  a  solution  to  a  specifically  logical  problem,
analysis  of  the  paradoxes  of  classical  mathematical  logic,  by
formally proving that certain propositions are meaningless.

The  paper  consists  of  three  parts.   In the first we develop the
elementary part of the  system--the  propositional  calculus--on  the
basis of intuitive considerations.  In the second part we outline the
"restricted" functional calculus corresponding to  the  propositional
calculus.   The third and last part uses a certain "extension" of the
functional  calculus  to   analyze   the   paradoxes   of   classical
mathematical logic.

We  are  indebted to Professor V.I. Glivenko for much valuable advice
and criticism.  In particular, he provided a more suitable definition
of the function a (see I, Section 2, subsection 1).

AIM-166, Lynn H. Quam, Sidney Liebes, Jr., Robert B.  Tucker,  Marsha
	Jo  Hannah,  Botond G. Eross,   COMPUTER  INTERACTIVE PICTURE
	PROCESSING, April 1972, 40 pages.

This  report  describes  work  done  in  image  processing  using  an
interactive  computer  system.  Techniques for image differencing are
described and examples using images returned from Mars by the Mariner
Nine  spacecraft are shown.  Also described are techniques for stereo
image processing.  Stereo processing  for  both  conventional  camera
systems and the Viking 1975 Lander camera system is reviewed.

AIM-167,  Ashok  Chandra,  EFFICIENT  COMPILATION OF LINEAR RECURSIVE
	PROGRAMS, June 1972, 43 pages.

We  consider  the  class  of  linear  recursive  programs.   A linear
recursive program is a set of procedures  where  each  procedure  can
make   at   most   one   recursive   call.   The  conventional  stack
implementation of recursion requires time and space both proportional
to  n,  the  depth  of  recursion.    It  is  shown  that in order to
implement linear recursion so as to execute in  time  n  one  doesn't
need space proportional to n:  n**ε for sufficiently small ε will do.
It is also known that with constant space one  can  implement  linear
recursion  in  time  n**2.   We  show  that  one  can do much better:
n**(1+ε) for arbitrarily small ε.  We also describe an algorithm that
lies between these two:  it takes time n*log n and space log n.

It  is  shown that several problems are closely related to the linear
recursion problem, for example, the problem  of  reversing  an  input
tape given a finite automaton with several one-way heads.  By casting
all these problems  into  canonical  form,  efficient  solutions  are
obtained simultaneously for all.
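
The flavor of such trade-offs can be seen on the tape reversal
problem: with only a constant amount of extra store, a one-way input
can be reversed in time proportional to n**2 by rescanning it once
per output symbol.  An editor's sketch in present-day Python (the
loop indices stand in for the automaton's one-way heads, and the
output itself is not counted as working store):

def reverse_with_rescans(make_reader):
    # make_reader() returns a fresh one-way reader over the input.
    n = sum(1 for _ in make_reader())        # one scan to find n
    out = []
    for i in range(n - 1, -1, -1):           # n scans: O(n**2) steps
        for j, symbol in enumerate(make_reader()):
            if j == i:
                out.append(symbol)
                break
    return out

print(reverse_with_rescans(lambda: iter("abcd")))  # ['d','c','b','a']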

AIM-168,  Shigeru Igarashi, ADMISSIBILITY OF FIXED-POINT INDUCTION IN
	FIRST-ORDER LOGIC OF TYPED THEORIES, May 1972, 40 pages.

First-order logic is extended so as  to  deal  with  typed  theories,
especially  that  of  continuous functions with fixed-point induction
formalized by D. Scott.  The translation of his formal system, or the
λ calculus-oriented system derived and implemented by R. Milner, into
this logic amounts to adding predicate calculus features to them.

In such a logic the fixed-point induction axioms are no longer valid,
in  general,  so  that  we characterize formulas for which Scott-type
induction is applicable, in terms of syntax which can be checked
by machines automatically.

Diskfile: Aim168.igr[aim,doc]

AIM-169, Robin Milner, LOGIC FOR COMPUTABLE  FUNCTIONS.   DESCRIPTION
	OF A MACHINE IMPLEMENTATION, May 1972, 36 pages.

This paper is primarily a user's manual  for  LCF,  a  proof-checking
program for a logic of computable functions proposed by Dana Scott in
1969, but unpublished by him.  We use the name LCF also for the logic
itself,   which  is  presented  at  the  start  of  the  paper.   The
proof-checking program is designed to allow the user interactively to
generate  formal  proofs  about  computable functions and functionals
over a variety  of  domains,  including  those  of  interest  to  the
computer   scientist--for   example,  integers,  lists  and  computer
programs and their semantics. The user's task is  alleviated  by  two
features:   a  subgoaling  facility  and  a  powerful  simplification
mechanism.  Applications include proofs of program correctness and in
particular  of  compiler  correctness;  these  applications  are  not
discussed herein, but are illustrated in the papers referenced in the
introduction.

Diskfile: Lcfman.rgm[aim,doc]

AIM-170, Yorick Wilks,  LAKOFF  ON  LINGUISTICS  AND  NATURAL  LOGIC,
	June 1972, 19 pages.

The paper examines and criticises Lakoff's notions of a natural logic
and of a generative semantics described in terms of logic.  I argue
that the relationship of these notions to logic as normally
understood is unclear, but I suggest, in the course of the paper, a
number of possible interpretations of his thesis of generative
semantics.  I argue further that on these interpretations it is a
mere notational variant of Chomskyan theory.  I argue, too, that
Lakoff's work may provide a service in that it constitutes a reductio
ad absurdum of the derivational paradigm of modern linguistics; and
shows, inadvertently, that only a system with the ability to
reconsider its own inferences can do the job that Lakoff sets up for
linguistic enquiry -- that is to say, only an "artificial
intelligence" system.

Diskfile: Lakoff.Yaw[aim,doc]


AIM-171, Roger Schank, ADVERBS AND BELIEF, June 1972, 30 pages.

The   treatment   of   a  certain  class  of  adverbs  in  conceptual
representation  is  given.   Certain  adverbs   are   shown   to   be
representative  of complex belief structures.  These adverbs serve as
pointers that explain where the sentence that they modify belongs  in
a belief structure.

AIM-172, Sylvia Weber Russell, SEMANTIC CATEGORIES OF NOMINALS FOR
	CONCEPTUAL   DEPENDENCY   ANALYSIS   OF   NATURAL   LANGUAGE,
	July 1972, 64 pages.

A system  for  the  semantic  categorization  of  conceptual  objects
(nominals)  is  provided.   The  system  is  intended to aid computer
understanding of  natural  language.   Specific  implementations  for
"noun-pairs" and prepositional phrases are offered.

AIM-173, Gerald Jacob Agin,  REPRESENTATION AND DESCRIPTION OF CURVED
         OBJECTS, October 1972.

This  document  describes  some  first  efforts  toward  the  goal of
description and recognition of the general  class  of  curved,  three
dimensional objects.

Three dimensional images, similar to depth maps, are obtained with  a
triangulation  system  using  a  television camera, and a deflectable
laser beam diverged into a plane by a cylindrical lens.

Complex  objects  are  represented as structures joining parts called
generalized cylinders.  These primitives are formalized in  a  volume
representation  by  an  arbitrary cross section varying along a space
curve axis.  Several types of joint structures are discussed.
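
The primitive can be suggested by a small sketch in present-day
Python with numpy (an editor's illustration: circular cross sections
and a straight axis are simplifying assumptions; the representation
itself allows arbitrary sections and curved axes):

import numpy as np

def generalized_cylinder(radius, length=1.0, n_axis=20, n_theta=16):
    # Returns an (n_axis, n_theta, 3) array of surface points.
    s = np.linspace(0.0, length, n_axis)           # along the axis
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    r = np.array([radius(v) for v in s])           # section radius
    x = np.outer(r, np.cos(theta))
    y = np.outer(r, np.sin(theta))
    z = np.repeat(s[:, None], n_theta, axis=1)
    return np.stack([x, y, z], axis=-1)

cone  = generalized_cylinder(lambda s: 1.0 - s)    # shrinking radius
snake = generalized_cylinder(lambda s: 0.1)        # constant radius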

Experimental  results  are  shown  for  the  description (building of
internal computer models) of a handful of complex objects,  beginning
with laser range data from actual objects.  Our programs have
generated complete  descriptions  of  rings,  cones,  and  snake-like
objects, all of which may be described by a single primitive. Complex
objects, such as dolls, have been segmented into parts, most of which
are well described by programs which implement generalized cylinder
descriptions.
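
The volume representation can be illustrated with a short sketch
(Python; the helical axis, circular cross section, and taper below are
stand-ins chosen for illustration, not Agin's data or code): surface
points are sampled by sweeping a cross section of varying radius along
a space-curve axis.
.BEGIN VERBATIM

import numpy as np

def generalized_cylinder(axis, radius, n_theta=24):
    """Sample surface points: at each axis point, sweep the cross
    section (here a circle of varying radius) in the plane normal to
    the axis tangent."""
    pts = []
    tangents = np.gradient(axis, axis=0)
    for p, t, r in zip(axis, tangents, radius):
        t = t / np.linalg.norm(t)
        # build two unit vectors spanning the plane normal to t
        a = np.array([1.0, 0.0, 0.0])
        if abs(t @ a) > 0.9:
            a = np.array([0.0, 1.0, 0.0])
        u = np.cross(t, a); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        for th in np.linspace(0, 2 * np.pi, n_theta, endpoint=False):
            pts.append(p + r * (np.cos(th) * u + np.sin(th) * v))
    return np.array(pts)

s = np.linspace(0, 1, 50)
axis = np.stack([np.cos(3 * s), np.sin(3 * s), s], axis=1)  # helical axis
radius = 0.2 + 0.1 * s                                      # cone-like taper
surface = generalized_cylinder(axis, radius)
.END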


AIM-174,  Morris, Francis Lockwood,  CORRECTNESS  OF  TRANSLATIONS OF
         PROGRAMMING LANGUAGES -- AN ALGEBRAIC APPROACH, August 1972,
         124 p.

Programming languages and their sets of meanings can be  modelled  by
general operator algebras; semantic functions and compiling functions
by  homomorphisms  of  operator  algebras.   A  restricted  class  of
individual  programs, machines, and computations can be modelled in a
uniform manner by binary relational algebras.  These two applications
of algebra to computing are compatible: the semantic function provided
by interpreting ("running") one binary relational algebra on another
is a homomorphism on an operator algebra whose elements are binary
relational algebras.

Using these mathematical tools, proofs can be provided systematically
of the correctness of compilers for fragmentary programming languages
each embodying a single language  "feature".   Exemplary  proofs  are
given   for  statement  sequences,  arithmetic  expressions,  Boolean
expressions, assignment statements, and while statements.  Moreover,
proofs of this sort can be combined to provide (synthetic) proofs,
in principle, for many different complete programming languages.  One
example of such a synthesis is given.
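
A toy instance of this picture, with an invented two-operator
expression language and stack machine (not the fragments treated in
the memo), shows the form of the correctness statement being checked:
the semantic function agrees with running the compiled code.
.BEGIN VERBATIM

def evaluate(e, env):
    """Semantic function: expressions are nested tuples, names, or numbers."""
    if isinstance(e, str):
        return env[e]
    if isinstance(e, (int, float)):
        return e
    op, l, r = e
    a, b = evaluate(l, env), evaluate(r, env)
    return a + b if op == "+" else a * b

def compile_expr(e):
    """Compiling function: postorder walk emits stack-machine code."""
    if isinstance(e, (str, int, float)):
        return [("PUSH", e)]
    op, l, r = e
    return compile_expr(l) + compile_expr(r) + [(op,)]

def run(code, env):
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            v = instr[1]
            stack.append(env[v] if isinstance(v, str) else v)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "+" else a * b)
    return stack.pop()

e = ("+", ("*", "x", 3), 1)        # x*3 + 1
env = {"x": 5}
# correctness of the translation on this instance:
assert evaluate(e, env) == run(compile_expr(e), env) == 16
.END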


AIM-175, Tanaka, Hozumi, HADAMARD TRANSFORM FOR SPEECH WAVE ANALYSIS,
         August 1972, 34 pages.

Two methods of speech wave analysis using the Hadamard transform  are
discussed.   The first method is a direct application of the Hadamard
transform for speech waves.   The  reason  this  method  yields  poor
results  is  discussed.   The second method is the application of the
Hadamard transform to a log-magnitude frequency spectrum.  After  the
application  of  the  Fourier  transform  the  Hadamard  transform is
applied to detect a pitch period or to get a smoothed spectrum.  This
method  shows some positive aspects of the Hadamard transform for the
analysis of a speech wave with regard to the reduction of  processing
time required for smoothing, but at the cost of precision.  A formant
tracking program for voiced  speech  is  implemented  by  using  this
method and an edge following technique used in scene analysis.
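
The second method can be sketched briefly (Python; the window length,
the synthetic test signal, and the coefficient cutoff are arbitrary
choices for illustration): take the Fourier transform, form the
log-magnitude spectrum, apply a fast Walsh-Hadamard transform, and
discard high-sequency coefficients to obtain a smoothed spectrum.
.BEGIN VERBATIM

import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

n = 512
t = np.arange(n)
speech = np.sin(2 * np.pi * 8 * t / n) + 0.5 * np.sin(2 * np.pi * 40 * t / n)
log_spectrum = np.log(np.abs(np.fft.rfft(speech))[:256] + 1e-9)
coeffs = fwht(log_spectrum)
# Zeroing high-sequency coefficients and inverting gives a smoothed
# spectrum; large coefficients reflect periodicity (pitch) structure.
coeffs[64:] = 0
smoothed = fwht(coeffs) / len(coeffs)   # WHT is its own inverse up to 1/N
.END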


AIM-176, Feldman, J.A.,  Low, J.R.,  Swinehart, D.C.,  Taylor, R. H.,
         RECENT DEVELOPMENTS IN SAIL, AN ALGOL-BASED LANGUAGE FOR
         ARTIFICIAL INTELLIGENCE, (forthcoming).

AIM-177, Paul,   Richard,   MODELLING,  TRAJECTORY  CALCULATION  AND
         SERVOING OF A COMPUTER CONTROLLED ARM, (forthcoming)


AIM-178,  Aharon Gill,   VISUAL  FEEDBACK  AND  RELATED  PROBLEMS  IN
        COMPUTER  CONTROLLED  HAND  EYE  COORDINATION, October 1972,
	134 pages.

A set of programs for precise manipulation of simple  planar  bounded
objects,  by  means  of visual feedback, was developed for use in the
Stanford hand-eye system.  The  system  includes  a  six  degrees  of
freedom  computer  controlled  manipulator (arm and hand) and a fully
instrumented computer television camera.

The  image  of  the  hand  and manipulated objects is acquired by the
computer through the camera.  The stored image is  analyzed  using  a
corner  and  line  finding  program  developed for this purpose.  The
analysis is simplified by using all the information  available  about
the  objects  and  the  hand,  and  previously  measured coordination
errors.  Simple touch and force sensing by the arm helps the
determination of three dimensional positions from one view.

The  utility  of  the information used to simplify the scene analysis
depends on the accuracy of the geometrical models of the  camera  and
arm.   A  set  of  calibration  updating  techniques and programs was
developed to maintain the accuracy of the camera model relative to
the arm model.

The precision obtained is better than .1 inch.  It is limited by  the
resolution of the imaging system and of the arm position measuring
system.
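
The calibration-updating idea can be suggested by a much-reduced
sketch (Python; a single constant offset stands in for the full
geometric camera and arm models, and all numbers are invented):
discrepancies between commanded and observed positions are averaged
into a bias correction that keeps the two models consistent.
.BEGIN VERBATIM

import numpy as np

commanded = np.array([[1.0, 2.0], [3.0, 1.5], [2.0, 0.5]])  # arm targets
observed  = commanded + np.array([0.08, -0.05]) \
            + 0.01 * np.random.randn(3, 2)     # camera measurements

offset = (observed - commanded).mean(axis=0)   # least-squares constant bias
corrected = observed - offset                  # apply updated calibration
residual = np.abs(corrected - commanded).max() # now on the order of noise
.END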



AIM-179, Baumgart, Bruce G.,  WINGED  EDGE POLYHEDRON REPRESENTATION, 
         October 1972, 46 pages.

(forthcoming)


AIM-180, Bajcsy, Ruzena,  COMPUTER  IDENTIFICATION OF TEXTURED VISUAL
         SCENES, October 1972, 156 pages.

This work deals with computer analysis  of  textured  outdoor  scenes
involving grass, trees, water and clouds. Descriptions of texture are
formalized from natural language descriptions; local descriptors  are
obtained  from  the directional and non-directional components of the
Fourier transform power spectrum. Analytic expressions  are  obtained
for  orientation, contrast, size, spacing, and in periodic cases, the
locations of texture elements.  These local descriptors  are  defined
over  windows  of  various  sizes;  the  choice of sizes is made by a
simple higher-level program.

The  process  of region growing is represented by a sheaf-theoretical
model which formalizes the operation of pasting local structure (over
a  window)  into  global  structure  (over  a  region). Programs were
implemented which form regions of similar color and  similar  texture
with respect to the local descriptors.

An interpretation is made of texture gradient as  distance  gradient
in  space.   A simple world model is described.  An interpretation of
texture regions  and  texture  gradient  is  made  with  a  simulated
correspondence with the world model.  We find a problem-solving
approach, involving hypothesis-verification, more satisfactory than
an earlier pattern recognition effort (Bajcsy 1970) and more crucial
for work with complex scenes than for scenes of polyhedra.  Geometric
clues  from  relative sizes, texture gradients, and interposition are
important in interpretation.
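
The local descriptors can be suggested by a small sketch (Python; the
window, sector counts, and test pattern are invented for
illustration): power-spectrum energy summed over angular sectors gives
directional components, and sums over radial rings give
non-directional ones, from which descriptors like those named above
would be derived.
.BEGIN VERBATIM

import numpy as np

def spectrum_descriptors(window, n_angles=8, n_rings=8):
    f = np.fft.fftshift(np.fft.fft2(window))
    power = np.abs(f) ** 2
    h, w = window.shape
    y, x = np.mgrid[0:h, 0:w]
    y, x = y - h // 2, x - w // 2
    theta = np.arctan2(y, x) % np.pi          # orientation, 0..pi
    radius = np.hypot(y, x)
    angular = [power[(theta >= k * np.pi / n_angles) &
                     (theta < (k + 1) * np.pi / n_angles)].sum()
               for k in range(n_angles)]
    rmax = radius.max()
    radial = [power[(radius >= k * rmax / n_rings) &
                    (radius < (k + 1) * rmax / n_rings)].sum()
              for k in range(n_rings)]
    return np.array(angular), np.array(radial)

# A vertically striped window concentrates energy in one angular sector.
win = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
angular, radial = spectrum_descriptors(win)
.END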

AIM-181, Buchanan, Bruce G., REVIEW OF HUBERT DREYFUS' WHAT COMPUTERS
         CAN'T DO: A CRITIQUE OF ARTIFICIAL REASON (Harper & Row, New
         York, 1972), November 1972, 14 pp.

The recent book "What Computers Can't Do" by  Hubert  Dreyfus  is  an
attack on artificial intelligence research.  This review takes the
position that the philosophical content of the book  is  interesting,
but that the attack on artificial intelligence is not well reasoned.


AIM-182, Colby, K.M., Hilf, F.D. CAN EXPERT JUDGES, USING TRANSCRIPTS
         OF   TELETYPED  PSYCHIATRIC  INTERVIEWS,  DISTINGUISH  HUMAN
         PARANOID PATIENTS FROM A  COMPUTER  SIMULATION  OF  PARANOID
         PROCESSES?,  December 1972, 8 pp.

Expert  judges,  psychiatrists  and  computer  scientists,  could not
correctly distinguish a simulation model of paranoid  processes  from
actual paranoid patients.

.END
.SS BUDGET
.BEGIN VERBATIM

                                             15-JUN-73 	  1-JUL-74
                                               thru         thru
                                             30-JUN-74	 30-JUN-75
	
I.  TOTAL SALARIES (detail below)	     $ 552,201	 $ 547,669


II.  STAFF BENEFITS
	
	16% to 8-31-73		
	17.3% 9-1-73 to 8-31-74        	        93,972
	18.3% 9-1-74 on        				    99,083

III. TRAVEL					30,700	    30,700


IV.  CAPITAL EQUIPMENT				75,000	    75,000


 V.  EQUIPMENT RENTAL				19,774	    19,774


VI.  EQUIPMENT MAINTENANCE			40,320	    40,320


VII. COMMUNICATIONS				16,200	    16,200
       (Telephones, dataphones, teletypes)
	
VIII. PUBLICATIONS COST (Past Experience)	13,500	    13,500

IX.   OTHER OPERATING EXPENSES			44,358	    43,779
	(e.g. office supplies, postage,
	freight, consulting, utilities)

X.    INDIRECT COSTS -  46% of NTDC		363,975	   363,975
	 (NTDC = I+II+III+VI+VII+VIII+IX; cross-check follows salary detail)



  TOTAL ARTIFICIAL INTELLIGENCE BUDGET ---- $ 1,250,000  $ 1,250,000
.SKIP TO COLUMN 1
ARTIFICIAL INTELLIGENCE SALARIES	    15-JUN-73     1-JUL-74
					       thru	    thru
					    30-JUN-74    30-JUN-75
  Faculty

      Feldman, Jerome			       $3,626       $3,626
	Associate Prof.
	17% Acad. Yr., 17% Summer

      Green, Cordell			       12,082	    12,082
	Assistant Prof.
	50% Acad. Yr., 100% Summer

      Hoare, C.A.R.				3,472		 0
	Visiting Associate Prof.
	50% Fall Qtr.'73 

      McCarthy, John			       19,582	    19,582
	Professor
	50% Acad. Yr., 100% Summer

      Winograd, Terry				7,002	     7,002
	Visiting Assistant Prof.
	50% Acad. Yr., 50% Summer

      TOTAL Faculty SALARIES		      $45,764	   $42,292

  Research Staff

      Allen, John			       15,000	    15,000
	Research Associate

      Binford, Thomas O.		       18,600	    18,600
	Research Associate

      Diffie, Whit				6,180		 0
	Research Programmer

      Earnest, Lester D.		       26,964	    26,964
	Res. Comp. Sci.

      Garland, Steve				6,996		 0
	Research Associate (Autumn 1973)

      Gorin, Ralph			       12,960	    12,960
	Systems Programmer

      Helliwell, Richard P.		       10,500	    10,500
	Systems Programmer

      Holloway, John			       15,000	    15,000
	Design Engineer
.SKIP TO COLUMN 1
 Research Staff (Continued)		    15-JUN-73	  1-JUL-74
					       thru	    thru
					    30-JUN-74    30-JUN-75

      Luckham, David			      $20,280      $20,280
	Research Computer Scientist

      Miller, Neil			       13,680	    13,680
	Research Associate

      Panofsky, Edward  F.		       12,960       12,960
	Computer Systems Engineer

      Paul, Richard P.			       16,140       16,140
	Research Programmer

      Pingle, Karl K.			       15,660       15,660
	Research Programmer

      Quam, Lynn			       18,000       18,000
	Research Associate

      Samuel, Arthur L.				6,666        6,666
	Senior Res. Assoc., 25% time

      Scheinman, Victor			       16,560       16,560
	Design Engineer

      Tesler, Larry			       16,680       16,680
	Systems Programmer

      Thosar, Ravindra				3,300		 0
	Research Associate

      Weyhrauch, Richard		       13,200       13,200
	Research Associate

      Wright, Fred			       12,000       12,000
	Systems Programmer

  TOTAL Research Staff SALARIES              $277,326     $260,850


  Student Research Assistants
	50% Acad. Yr., 100% Summer
	 unless noted otherwise

      Baumgart, Bruce G.			5,538	     5,538

      Bolles, Robert				5,205	     5,304

      Borning, Alan			       $5,004       $5,205
.SKIP TO COLUMN 1
Student Research Assistants (Cont.)	    15-JUN-73     1-JUL-74
					       thru	    thru
                                            30-JUN-74    30-JUN-75

      Davis, Randall                            5,304        5,304

      Frost, Martin                             5,070        5,070

      Ganapathy, Kicha                          5,304        5,304

      Gennery, Donald                           5,070        5,070

      Hemphill, Linda                           5,538        5,538

      Hui, Wing                                 4,914        2,816

      Karp, Peggy                               5,070        5,070

      Lenat, D. B.                              5,070        5,070

      Low, J. Richard                           5,538        5,538

      Moorer, James A.                          5,538        5,538

      Morales, Jorge                            5,070        5,070

      Nevatia, Ramakant                         5,538        5,538

      Newey, Malcolm C.                         5,304        5,304

      Poole, David                              5,304        5,304

      Riesbeck, Christopher                     5,538        5,538

      Samet, Hanan                              5,304        5,304

      Suzuki, Norihisa                          6,435        5,070

      Taylor, Russell H.                        3,345        3,345

      Thomas, Arthur J.                         5,070        5,070

      Wagner, Todd J.                           4,914        4,914

      Yu, Frank S.                              4,914        4,914

      -------                                   4,914            0

TOTAL Student Research Assistant SALARIES    $129,813     $121,736
.SKIP TO COLUMN 1
						15-JUN-73    1-JUL-74
						   thru	       thru
                                                30-JUN-74   30-JUN-75
Others

      Barnett, Barbara Ann                          8,017       8,017
	Secretary, 96% time

      Baur, Queenette                               8,352       8,352
	Secretary, 96% time

      Briggs, Norman R.                            13,500      13,500
	Research Coordinator, 90% time

      Enderlein, Trina                              1,602       1,602
	Secretary, 25% time

      Gafford, Thomas                               4,380       4,380
	Electronic Tech., 50% time

      Langle, Robert                                  840         840
	Administrator, 5% time

      Stuart, Elbridge                              7,776       7,776
	Electronic Tech.

      Wilson, Charles                               8,568       8,568
	Electronic Tech.

      Wood, Patricia                                6,624       6,624
	Secretary, 96% time

      Zingheim, Thomas J.                          10,944      10,944
	Electronic Tech.

      -----,                                        2,400       2,400
	Hourly Tech.

      TOTAL Others SALARIES                       $73,003     $73,003


   SUBTOTAL ARTIFICIAL INTELLIGENCE SALARIES     $525,906    $497,881
    Projected Salary Increases  at 5%/Yr.          26,295      49,788
      TOTAL ARTIFICIAL INTELLIGENCE SALARIES     $552,201    $547,669
.END
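
The indirect-cost and total figures above can be cross-checked
mechanically; the short sketch below (Python, an editorial
illustration only) reproduces the first-year column, and the second
year yields the same NTDC.
.BEGIN VERBATIM

# Figures copied from the first-year budget column above.
salaries   = 552_201   # I
benefits   =  93_972   # II
travel     =  30_700   # III
equipment  =  75_000   # IV (capital, excluded from NTDC)
rental     =  19_774   # V  (excluded from NTDC)
maint      =  40_320   # VI
comm       =  16_200   # VII
pubs       =  13_500   # VIII
other      =  44_358   # IX

ntdc = salaries + benefits + travel + maint + comm + pubs + other
indirect = round(0.46 * ntdc)            # X: 46% of NTDC
total = ntdc + equipment + rental + indirect

assert ntdc == 791_251
assert indirect == 363_975
assert total == 1_250_000
.END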
.SEC HEURISTIC PROGRAMMING PROJECT
.SEC NETWORK PROTOCOL DEVELOPMENT PROJECT
.SEC TOTAL BUDGET
.SEC COGNIZANT PERSONNEL
.BEGIN VERBATIM

	For contractual matters:

		Office of the Research Administrator
		Stanford University
		Stanford, California 94305

		Telephone: (415) 321-2300, ext. 2883

	For technical and scientific matters regarding the
	Artificial Intelligence Project:

		Prof. John McCarthy
		Computer Science Department
		Telephone: (415) 321-2300, ext. 4971

	For administrative matters regarding the Artificial
	Intelligence Project, including questions relating
	to the budget or property acquisition:

		Mr. Lester D. Earnest
		Mr. Norman Briggs
		Computer Science Department
		Telephone: (415) 321-2300, ext. 4971

	For technical, scientific, and budgetary matters
	regarding the Heuristic Programming Project:

		Prof. Edward  Feigenbaum
		Computer Science Department
		Telephone:  (415) 321-2300, ext. 4878

		Prof. Joshua Lederberg
		Genetics Department
		Telephone:  (415) 321-2300, ext. 5052

	For technical, scientific, and budgetary matters
	regarding the Network Protocol Development Project:

		Prof. Vinton Cerf
		Stanford Electronics Laboratories
		Telephone: (415) 321-2300, ext. 73458
.END
.BACK